Dec 12 17:25:14.443369 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 12 17:25:14.443393 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Dec 12 15:17:36 -00 2025 Dec 12 17:25:14.443403 kernel: KASLR enabled Dec 12 17:25:14.443409 kernel: efi: EFI v2.7 by EDK II Dec 12 17:25:14.443415 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438351218 Dec 12 17:25:14.443421 kernel: random: crng init done Dec 12 17:25:14.443428 kernel: secureboot: Secure boot disabled Dec 12 17:25:14.443434 kernel: ACPI: Early table checksum verification disabled Dec 12 17:25:14.443440 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS ) Dec 12 17:25:14.443447 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013) Dec 12 17:25:14.443454 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443460 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443466 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443472 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443481 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443487 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443494 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443500 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443507 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443513 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 12 17:25:14.443519 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013) Dec 12 17:25:14.443526 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Dec 12 17:25:14.443532 kernel: ACPI: Use ACPI SPCR as default console: Yes Dec 12 17:25:14.443540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff] Dec 12 17:25:14.443546 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff] Dec 12 17:25:14.443553 kernel: Zone ranges: Dec 12 17:25:14.443559 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 12 17:25:14.443566 kernel: DMA32 empty Dec 12 17:25:14.443572 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff] Dec 12 17:25:14.443578 kernel: Device empty Dec 12 17:25:14.443585 kernel: Movable zone start for each node Dec 12 17:25:14.443591 kernel: Early memory node ranges Dec 12 17:25:14.443597 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff] Dec 12 17:25:14.443604 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff] Dec 12 17:25:14.443610 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff] Dec 12 17:25:14.443618 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff] Dec 12 17:25:14.443624 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff] Dec 12 17:25:14.443631 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff] Dec 12 17:25:14.443637 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Dec 12 17:25:14.443644 kernel: psci: probing for conduit method from ACPI. 
Dec 12 17:25:14.443653 kernel: psci: PSCIv1.3 detected in firmware. Dec 12 17:25:14.443661 kernel: psci: Using standard PSCI v0.2 function IDs Dec 12 17:25:14.443668 kernel: psci: Trusted OS migration not required Dec 12 17:25:14.443675 kernel: psci: SMC Calling Convention v1.1 Dec 12 17:25:14.443681 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Dec 12 17:25:14.443688 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 12 17:25:14.443695 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 12 17:25:14.443702 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0 Dec 12 17:25:14.443709 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0 Dec 12 17:25:14.443717 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Dec 12 17:25:14.443724 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Dec 12 17:25:14.443745 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Dec 12 17:25:14.443752 kernel: Detected PIPT I-cache on CPU0 Dec 12 17:25:14.443759 kernel: CPU features: detected: GIC system register CPU interface Dec 12 17:25:14.443765 kernel: CPU features: detected: Spectre-v4 Dec 12 17:25:14.443772 kernel: CPU features: detected: Spectre-BHB Dec 12 17:25:14.443779 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 12 17:25:14.443786 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 12 17:25:14.443800 kernel: CPU features: detected: ARM erratum 1418040 Dec 12 17:25:14.443807 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 12 17:25:14.443817 kernel: alternatives: applying boot alternatives Dec 12 17:25:14.443825 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:25:14.443832 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Dec 12 17:25:14.443839 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 12 17:25:14.443846 kernel: Fallback order for Node 0: 0 Dec 12 17:25:14.443853 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304 Dec 12 17:25:14.443860 kernel: Policy zone: Normal Dec 12 17:25:14.443866 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 12 17:25:14.443873 kernel: software IO TLB: area num 4. Dec 12 17:25:14.443880 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Dec 12 17:25:14.443888 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 12 17:25:14.443895 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 12 17:25:14.443903 kernel: rcu: RCU event tracing is enabled. Dec 12 17:25:14.443910 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 12 17:25:14.443917 kernel: Trampoline variant of Tasks RCU enabled. Dec 12 17:25:14.443924 kernel: Tracing variant of Tasks RCU enabled. Dec 12 17:25:14.443930 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 12 17:25:14.443937 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 12 17:25:14.443944 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Dec 12 17:25:14.443951 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Dec 12 17:25:14.443958 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 12 17:25:14.443966 kernel: GICv3: 256 SPIs implemented Dec 12 17:25:14.443973 kernel: GICv3: 0 Extended SPIs implemented Dec 12 17:25:14.443979 kernel: Root IRQ handler: gic_handle_irq Dec 12 17:25:14.443986 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 12 17:25:14.443993 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Dec 12 17:25:14.444000 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Dec 12 17:25:14.444007 kernel: ITS [mem 0x08080000-0x0809ffff] Dec 12 17:25:14.444014 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1) Dec 12 17:25:14.444021 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1) Dec 12 17:25:14.444028 kernel: GICv3: using LPI property table @0x0000000100130000 Dec 12 17:25:14.444034 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000 Dec 12 17:25:14.444041 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 12 17:25:14.444049 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:25:14.444056 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 12 17:25:14.444063 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 12 17:25:14.444070 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 12 17:25:14.444077 kernel: arm-pv: using stolen time PV Dec 12 17:25:14.444084 kernel: Console: colour dummy device 80x25 Dec 12 17:25:14.444091 kernel: ACPI: Core revision 20240827 Dec 12 17:25:14.444099 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 12 17:25:14.444107 kernel: pid_max: default: 32768 minimum: 301 Dec 12 17:25:14.444130 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 12 17:25:14.444138 kernel: landlock: Up and running. Dec 12 17:25:14.444148 kernel: SELinux: Initializing. Dec 12 17:25:14.444156 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:25:14.444163 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 12 17:25:14.444170 kernel: rcu: Hierarchical SRCU implementation. Dec 12 17:25:14.444178 kernel: rcu: Max phase no-delay instances is 400. Dec 12 17:25:14.444189 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Dec 12 17:25:14.444196 kernel: Remapping and enabling EFI services. Dec 12 17:25:14.444203 kernel: smp: Bringing up secondary CPUs ... 
Dec 12 17:25:14.444210 kernel: Detected PIPT I-cache on CPU1 Dec 12 17:25:14.444218 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Dec 12 17:25:14.444225 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000 Dec 12 17:25:14.444232 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:25:14.444241 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 12 17:25:14.444248 kernel: Detected PIPT I-cache on CPU2 Dec 12 17:25:14.444261 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Dec 12 17:25:14.444269 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000 Dec 12 17:25:14.444277 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:25:14.444284 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Dec 12 17:25:14.444292 kernel: Detected PIPT I-cache on CPU3 Dec 12 17:25:14.444299 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Dec 12 17:25:14.444308 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000 Dec 12 17:25:14.444316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 12 17:25:14.444323 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Dec 12 17:25:14.444331 kernel: smp: Brought up 1 node, 4 CPUs Dec 12 17:25:14.444338 kernel: SMP: Total of 4 processors activated. Dec 12 17:25:14.444345 kernel: CPU: All CPU(s) started at EL1 Dec 12 17:25:14.444354 kernel: CPU features: detected: 32-bit EL0 Support Dec 12 17:25:14.444361 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 12 17:25:14.444369 kernel: CPU features: detected: Common not Private translations Dec 12 17:25:14.444376 kernel: CPU features: detected: CRC32 instructions Dec 12 17:25:14.444383 kernel: CPU features: detected: Enhanced Virtualization Traps Dec 12 17:25:14.444391 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 12 17:25:14.444398 kernel: CPU features: detected: LSE atomic instructions Dec 12 17:25:14.444407 kernel: CPU features: detected: Privileged Access Never Dec 12 17:25:14.444414 kernel: CPU features: detected: RAS Extension Support Dec 12 17:25:14.444422 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Dec 12 17:25:14.444429 kernel: alternatives: applying system-wide alternatives Dec 12 17:25:14.444437 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Dec 12 17:25:14.444445 kernel: Memory: 16324496K/16777216K available (11200K kernel code, 2456K rwdata, 9084K rodata, 12416K init, 1038K bss, 429936K reserved, 16384K cma-reserved) Dec 12 17:25:14.444452 kernel: devtmpfs: initialized Dec 12 17:25:14.444462 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 12 17:25:14.444469 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 12 17:25:14.444477 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 12 17:25:14.444484 kernel: 0 pages in range for non-PLT usage Dec 12 17:25:14.444491 kernel: 515184 pages in range for PLT usage Dec 12 17:25:14.444499 kernel: pinctrl core: initialized pinctrl subsystem Dec 12 17:25:14.444506 kernel: SMBIOS 3.0.0 present. 
Dec 12 17:25:14.444514 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Dec 12 17:25:14.444523 kernel: DMI: Memory slots populated: 1/1 Dec 12 17:25:14.444534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 12 17:25:14.444542 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations Dec 12 17:25:14.444550 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 12 17:25:14.444557 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 12 17:25:14.444565 kernel: audit: initializing netlink subsys (disabled) Dec 12 17:25:14.444572 kernel: audit: type=2000 audit(0.036:1): state=initialized audit_enabled=0 res=1 Dec 12 17:25:14.444581 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 12 17:25:14.444588 kernel: cpuidle: using governor menu Dec 12 17:25:14.444596 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 12 17:25:14.444603 kernel: ASID allocator initialised with 32768 entries Dec 12 17:25:14.444611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 12 17:25:14.444618 kernel: Serial: AMBA PL011 UART driver Dec 12 17:25:14.444626 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 12 17:25:14.444635 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 12 17:25:14.444642 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 12 17:25:14.444650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 12 17:25:14.444660 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 12 17:25:14.444668 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 12 17:25:14.444675 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 12 17:25:14.444683 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 12 17:25:14.444690 kernel: ACPI: Added _OSI(Module Device) Dec 12 17:25:14.444698 kernel: ACPI: Added _OSI(Processor Device) Dec 12 17:25:14.444706 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 12 17:25:14.444713 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 12 17:25:14.444720 kernel: ACPI: Interpreter enabled Dec 12 17:25:14.444728 kernel: ACPI: Using GIC for interrupt routing Dec 12 17:25:14.444735 kernel: ACPI: MCFG table detected, 1 entries Dec 12 17:25:14.444743 kernel: ACPI: CPU0 has been hot-added Dec 12 17:25:14.444751 kernel: ACPI: CPU1 has been hot-added Dec 12 17:25:14.444759 kernel: ACPI: CPU2 has been hot-added Dec 12 17:25:14.444766 kernel: ACPI: CPU3 has been hot-added Dec 12 17:25:14.444774 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Dec 12 17:25:14.444781 kernel: printk: legacy console [ttyAMA0] enabled Dec 12 17:25:14.444789 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 12 17:25:14.444951 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 12 17:25:14.445051 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 12 17:25:14.445151 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 12 17:25:14.445242 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Dec 12 17:25:14.445323 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Dec 12 17:25:14.445332 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Dec 12 17:25:14.445340 
kernel: PCI host bridge to bus 0000:00 Dec 12 17:25:14.445429 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Dec 12 17:25:14.445503 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 12 17:25:14.445574 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Dec 12 17:25:14.445645 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 12 17:25:14.445744 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Dec 12 17:25:14.445843 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.445929 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff] Dec 12 17:25:14.446009 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 12 17:25:14.446095 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff] Dec 12 17:25:14.446203 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Dec 12 17:25:14.446295 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.446380 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff] Dec 12 17:25:14.446460 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 12 17:25:14.446539 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff] Dec 12 17:25:14.446625 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.446705 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff] Dec 12 17:25:14.446784 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Dec 12 17:25:14.446876 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff] Dec 12 17:25:14.446955 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Dec 12 17:25:14.447041 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.447130 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff] Dec 12 17:25:14.447213 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Dec 12 17:25:14.447295 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Dec 12 17:25:14.447386 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.447465 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff] Dec 12 17:25:14.447546 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Dec 12 17:25:14.447625 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff] Dec 12 17:25:14.447712 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Dec 12 17:25:14.447835 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.447924 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff] Dec 12 17:25:14.448007 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Dec 12 17:25:14.448086 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff] Dec 12 17:25:14.448178 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Dec 12 17:25:14.448264 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.448348 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff] Dec 12 17:25:14.448428 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Dec 12 17:25:14.448516 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.448595 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff] Dec 12 17:25:14.448721 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Dec 12 
17:25:14.448818 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.448903 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff] Dec 12 17:25:14.448982 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Dec 12 17:25:14.449066 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.449163 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff] Dec 12 17:25:14.449247 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Dec 12 17:25:14.449332 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.449412 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff] Dec 12 17:25:14.449489 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Dec 12 17:25:14.449571 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.449650 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff] Dec 12 17:25:14.449731 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Dec 12 17:25:14.449817 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.449907 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff] Dec 12 17:25:14.449997 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Dec 12 17:25:14.450086 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.450187 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff] Dec 12 17:25:14.450274 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Dec 12 17:25:14.450363 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.450450 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff] Dec 12 17:25:14.450541 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Dec 12 17:25:14.450633 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.450727 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff] Dec 12 17:25:14.450808 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Dec 12 17:25:14.450912 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.450993 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff] Dec 12 17:25:14.451071 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Dec 12 17:25:14.451169 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.451254 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff] Dec 12 17:25:14.451332 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Dec 12 17:25:14.451418 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff] Dec 12 17:25:14.451504 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff] Dec 12 17:25:14.451592 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.451671 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff] Dec 12 17:25:14.451764 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Dec 12 17:25:14.451845 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff] Dec 12 17:25:14.451923 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff] Dec 12 17:25:14.452008 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.452094 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff] Dec 12 17:25:14.452184 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Dec 12 17:25:14.452264 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff] Dec 12 17:25:14.452341 kernel: pci 0000:00:03.3: bridge window [mem 
0x11a00000-0x11bfffff] Dec 12 17:25:14.452424 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.452503 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff] Dec 12 17:25:14.452580 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Dec 12 17:25:14.452658 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff] Dec 12 17:25:14.452738 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff] Dec 12 17:25:14.452823 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.452901 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff] Dec 12 17:25:14.452979 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Dec 12 17:25:14.453056 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff] Dec 12 17:25:14.453143 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff] Dec 12 17:25:14.453230 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.453309 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff] Dec 12 17:25:14.453387 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Dec 12 17:25:14.453465 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff] Dec 12 17:25:14.453543 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff] Dec 12 17:25:14.453630 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.453713 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff] Dec 12 17:25:14.453803 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Dec 12 17:25:14.453892 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff] Dec 12 17:25:14.453972 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff] Dec 12 17:25:14.454061 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.454157 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff] Dec 12 17:25:14.454240 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Dec 12 17:25:14.454320 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff] Dec 12 17:25:14.454400 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff] Dec 12 17:25:14.454489 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.454568 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff] Dec 12 17:25:14.454646 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Dec 12 17:25:14.454726 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff] Dec 12 17:25:14.454804 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff] Dec 12 17:25:14.454887 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.454965 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff] Dec 12 17:25:14.455042 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Dec 12 17:25:14.455127 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff] Dec 12 17:25:14.455211 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff] Dec 12 17:25:14.455295 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.455374 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff] Dec 12 17:25:14.455458 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Dec 12 17:25:14.455536 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff] Dec 12 17:25:14.455613 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff] Dec 12 17:25:14.455698 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 
17:25:14.455793 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff] Dec 12 17:25:14.455873 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Dec 12 17:25:14.455952 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff] Dec 12 17:25:14.456030 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff] Dec 12 17:25:14.456126 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.456209 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff] Dec 12 17:25:14.456299 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Dec 12 17:25:14.456378 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff] Dec 12 17:25:14.456460 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff] Dec 12 17:25:14.456545 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.456623 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff] Dec 12 17:25:14.456702 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Dec 12 17:25:14.456781 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff] Dec 12 17:25:14.456858 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff] Dec 12 17:25:14.456946 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.457024 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff] Dec 12 17:25:14.457103 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Dec 12 17:25:14.457193 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff] Dec 12 17:25:14.457272 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff] Dec 12 17:25:14.457357 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 12 17:25:14.457448 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff] Dec 12 17:25:14.457527 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Dec 12 17:25:14.457607 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff] Dec 12 17:25:14.457685 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff] Dec 12 17:25:14.457774 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Dec 12 17:25:14.457859 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff] Dec 12 17:25:14.457943 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Dec 12 17:25:14.458023 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Dec 12 17:25:14.458128 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 12 17:25:14.458223 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit] Dec 12 17:25:14.458313 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Dec 12 17:25:14.458397 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff] Dec 12 17:25:14.458478 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Dec 12 17:25:14.458568 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Dec 12 17:25:14.458650 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Dec 12 17:25:14.458740 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 12 17:25:14.458829 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff] Dec 12 17:25:14.458912 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Dec 12 17:25:14.459001 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint Dec 12 17:25:14.459082 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff] Dec 12 17:25:14.459175 kernel: pci 
0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Dec 12 17:25:14.459257 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Dec 12 17:25:14.459342 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Dec 12 17:25:14.459421 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Dec 12 17:25:14.459505 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Dec 12 17:25:14.459585 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Dec 12 17:25:14.459666 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Dec 12 17:25:14.459764 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 12 17:25:14.459847 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Dec 12 17:25:14.459926 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Dec 12 17:25:14.460007 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 12 17:25:14.460085 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Dec 12 17:25:14.460184 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Dec 12 17:25:14.460269 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 12 17:25:14.460348 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Dec 12 17:25:14.460426 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Dec 12 17:25:14.460509 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 12 17:25:14.460591 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Dec 12 17:25:14.460671 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Dec 12 17:25:14.460753 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 12 17:25:14.460831 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000 Dec 12 17:25:14.460908 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000 Dec 12 17:25:14.460992 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 12 17:25:14.461072 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Dec 12 17:25:14.461162 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Dec 12 17:25:14.461245 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 12 17:25:14.461323 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Dec 12 17:25:14.461405 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] 
add_size 200000 add_align 100000 Dec 12 17:25:14.461492 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Dec 12 17:25:14.461570 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000 Dec 12 17:25:14.461649 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000 Dec 12 17:25:14.461730 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Dec 12 17:25:14.461821 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Dec 12 17:25:14.461900 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000 Dec 12 17:25:14.461985 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Dec 12 17:25:14.462063 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000 Dec 12 17:25:14.462741 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000 Dec 12 17:25:14.462854 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Dec 12 17:25:14.462934 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000 Dec 12 17:25:14.463014 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Dec 12 17:25:14.463103 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Dec 12 17:25:14.463230 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Dec 12 17:25:14.463311 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Dec 12 17:25:14.463395 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Dec 12 17:25:14.463475 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Dec 12 17:25:14.463557 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Dec 12 17:25:14.463641 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Dec 12 17:25:14.463719 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Dec 12 17:25:14.463821 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Dec 12 17:25:14.463905 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Dec 12 17:25:14.463985 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Dec 12 17:25:14.464067 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Dec 12 17:25:14.464168 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Dec 12 17:25:14.464251 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Dec 12 17:25:14.464330 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 
Dec 12 17:25:14.464412 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Dec 12 17:25:14.464494 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Dec 12 17:25:14.464574 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Dec 12 17:25:14.464663 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Dec 12 17:25:14.464746 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Dec 12 17:25:14.464825 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Dec 12 17:25:14.464907 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Dec 12 17:25:14.464989 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Dec 12 17:25:14.465068 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Dec 12 17:25:14.465163 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Dec 12 17:25:14.465244 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Dec 12 17:25:14.465322 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Dec 12 17:25:14.465407 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Dec 12 17:25:14.465490 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Dec 12 17:25:14.465569 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Dec 12 17:25:14.465651 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Dec 12 17:25:14.465741 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Dec 12 17:25:14.465822 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Dec 12 17:25:14.465905 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Dec 12 17:25:14.465985 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Dec 12 17:25:14.466068 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Dec 12 17:25:14.466172 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Dec 12 17:25:14.466255 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Dec 12 17:25:14.466346 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Dec 12 17:25:14.466433 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Dec 12 17:25:14.466513 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Dec 12 17:25:14.466591 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Dec 12 17:25:14.466682 kernel: pci 
0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Dec 12 17:25:14.466764 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Dec 12 17:25:14.466843 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Dec 12 17:25:14.466928 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Dec 12 17:25:14.467007 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Dec 12 17:25:14.467084 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Dec 12 17:25:14.467186 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Dec 12 17:25:14.467275 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Dec 12 17:25:14.467363 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Dec 12 17:25:14.467454 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Dec 12 17:25:14.467534 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Dec 12 17:25:14.467613 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Dec 12 17:25:14.467705 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Dec 12 17:25:14.467797 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Dec 12 17:25:14.467879 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Dec 12 17:25:14.467961 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Dec 12 17:25:14.468040 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Dec 12 17:25:14.468132 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Dec 12 17:25:14.468217 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Dec 12 17:25:14.468297 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Dec 12 17:25:14.468379 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Dec 12 17:25:14.468458 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Dec 12 17:25:14.468538 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Dec 12 17:25:14.468625 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Dec 12 17:25:14.468710 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Dec 12 17:25:14.468798 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Dec 12 17:25:14.468883 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Dec 12 17:25:14.468968 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Dec 12 17:25:14.469049 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Dec 12 17:25:14.469141 kernel: pci 0000:00:01.5: bridge 
window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Dec 12 17:25:14.469226 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Dec 12 17:25:14.469305 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Dec 12 17:25:14.469388 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Dec 12 17:25:14.469466 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Dec 12 17:25:14.469557 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Dec 12 17:25:14.469646 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Dec 12 17:25:14.469729 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Dec 12 17:25:14.469807 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Dec 12 17:25:14.469886 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Dec 12 17:25:14.469968 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Dec 12 17:25:14.470047 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Dec 12 17:25:14.470144 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Dec 12 17:25:14.470226 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Dec 12 17:25:14.470319 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Dec 12 17:25:14.470405 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Dec 12 17:25:14.470487 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Dec 12 17:25:14.470567 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Dec 12 17:25:14.470645 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Dec 12 17:25:14.470739 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Dec 12 17:25:14.470824 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Dec 12 17:25:14.470905 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Dec 12 17:25:14.470994 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Dec 12 17:25:14.471076 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Dec 12 17:25:14.471174 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Dec 12 17:25:14.471260 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Dec 12 17:25:14.471340 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Dec 12 17:25:14.471428 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Dec 12 17:25:14.471513 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Dec 12 17:25:14.471595 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Dec 12 17:25:14.471673 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Dec 12 17:25:14.471765 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Dec 12 17:25:14.471855 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Dec 12 17:25:14.471936 
kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Dec 12 17:25:14.472025 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Dec 12 17:25:14.472108 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Dec 12 17:25:14.472206 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Dec 12 17:25:14.472288 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Dec 12 17:25:14.472367 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Dec 12 17:25:14.472458 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Dec 12 17:25:14.472540 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Dec 12 17:25:14.472623 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Dec 12 17:25:14.472710 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Dec 12 17:25:14.472792 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Dec 12 17:25:14.472873 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Dec 12 17:25:14.472963 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Dec 12 17:25:14.473043 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Dec 12 17:25:14.473134 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Dec 12 17:25:14.473219 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Dec 12 17:25:14.473300 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Dec 12 17:25:14.473380 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Dec 12 17:25:14.473460 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Dec 12 17:25:14.473538 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Dec 12 17:25:14.473617 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Dec 12 17:25:14.473697 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Dec 12 17:25:14.473778 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Dec 12 17:25:14.473856 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Dec 12 17:25:14.473934 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Dec 12 17:25:14.474012 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Dec 12 17:25:14.474091 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Dec 12 17:25:14.474183 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Dec 12 17:25:14.474264 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Dec 12 17:25:14.474341 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Dec 12 17:25:14.474419 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Dec 12 17:25:14.474499 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 17:25:14.474581 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Dec 12 17:25:14.474666 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Dec 12 17:25:14.474749 kernel: pci 0000:00:01.6: BAR 0 [mem 
0x14206000-0x14206fff]: assigned Dec 12 17:25:14.474828 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Dec 12 17:25:14.474908 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Dec 12 17:25:14.474987 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Dec 12 17:25:14.475066 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Dec 12 17:25:14.475153 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Dec 12 17:25:14.475237 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Dec 12 17:25:14.475315 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Dec 12 17:25:14.475393 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Dec 12 17:25:14.475474 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Dec 12 17:25:14.475552 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Dec 12 17:25:14.475638 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Dec 12 17:25:14.475717 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Dec 12 17:25:14.475819 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Dec 12 17:25:14.475901 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Dec 12 17:25:14.475979 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Dec 12 17:25:14.476059 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Dec 12 17:25:14.476153 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Dec 12 17:25:14.476235 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Dec 12 17:25:14.476316 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.476394 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.476472 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Dec 12 17:25:14.476550 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.476631 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.476711 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Dec 12 17:25:14.476790 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.476876 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.476961 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Dec 12 17:25:14.477042 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.477155 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.477242 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Dec 12 17:25:14.477320 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.477399 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.477480 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Dec 12 17:25:14.477559 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.477637 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.477718 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Dec 12 17:25:14.477796 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't 
assign; no space Dec 12 17:25:14.477874 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.477969 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Dec 12 17:25:14.478049 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.478146 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.478234 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Dec 12 17:25:14.478314 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.478393 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.478473 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Dec 12 17:25:14.478552 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.478631 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.478712 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Dec 12 17:25:14.478792 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.478872 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.478953 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Dec 12 17:25:14.479032 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.479118 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.479203 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Dec 12 17:25:14.479290 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.479369 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.479448 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Dec 12 17:25:14.479526 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.479605 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.479685 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Dec 12 17:25:14.479781 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.479866 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.479950 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Dec 12 17:25:14.480029 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.480109 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.480199 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Dec 12 17:25:14.480279 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.480360 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.480446 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Dec 12 17:25:14.480526 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.480604 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.480683 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Dec 12 17:25:14.480762 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Dec 12 
17:25:14.480844 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Dec 12 17:25:14.480923 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Dec 12 17:25:14.481003 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Dec 12 17:25:14.481082 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Dec 12 17:25:14.481180 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Dec 12 17:25:14.481261 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Dec 12 17:25:14.481341 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Dec 12 17:25:14.481432 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Dec 12 17:25:14.481513 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Dec 12 17:25:14.481593 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Dec 12 17:25:14.481682 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Dec 12 17:25:14.481768 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Dec 12 17:25:14.481850 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Dec 12 17:25:14.481933 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.482013 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.482095 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.482198 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.482280 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.482359 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.482438 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.482516 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.482602 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.482688 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.482768 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.482845 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.482924 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.483002 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.483081 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.483174 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.483261 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.483348 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.483429 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.483509 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.483590 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.483675 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.483770 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.483853 kernel: pci 
0000:00:01.6: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.483934 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.484013 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.484094 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.484197 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.484281 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.484361 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.484441 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.484520 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.484603 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.484681 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.484760 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Dec 12 17:25:14.484838 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Dec 12 17:25:14.484925 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Dec 12 17:25:14.485006 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Dec 12 17:25:14.485089 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Dec 12 17:25:14.485178 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Dec 12 17:25:14.485259 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Dec 12 17:25:14.485337 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Dec 12 17:25:14.485421 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Dec 12 17:25:14.485500 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Dec 12 17:25:14.485581 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Dec 12 17:25:14.485660 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Dec 12 17:25:14.485746 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Dec 12 17:25:14.485827 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Dec 12 17:25:14.485908 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Dec 12 17:25:14.485989 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Dec 12 17:25:14.486069 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Dec 12 17:25:14.486184 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Dec 12 17:25:14.486267 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Dec 12 17:25:14.486347 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Dec 12 17:25:14.486424 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Dec 12 17:25:14.486509 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Dec 12 17:25:14.486606 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Dec 12 17:25:14.486686 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Dec 12 17:25:14.486766 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Dec 12 17:25:14.486854 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Dec 12 17:25:14.486943 
kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Dec 12 17:25:14.487024 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Dec 12 17:25:14.487106 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Dec 12 17:25:14.487203 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Dec 12 17:25:14.487282 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 12 17:25:14.487361 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Dec 12 17:25:14.487439 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Dec 12 17:25:14.487517 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 12 17:25:14.487599 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Dec 12 17:25:14.487677 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Dec 12 17:25:14.487770 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 12 17:25:14.487861 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Dec 12 17:25:14.487941 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Dec 12 17:25:14.488019 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Dec 12 17:25:14.488102 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Dec 12 17:25:14.488191 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Dec 12 17:25:14.488271 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Dec 12 17:25:14.488350 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Dec 12 17:25:14.488428 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Dec 12 17:25:14.488508 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Dec 12 17:25:14.488588 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Dec 12 17:25:14.488666 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Dec 12 17:25:14.488745 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Dec 12 17:25:14.488824 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Dec 12 17:25:14.488903 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Dec 12 17:25:14.488982 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Dec 12 17:25:14.489062 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Dec 12 17:25:14.489153 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Dec 12 17:25:14.489235 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 12 17:25:14.489314 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Dec 12 17:25:14.489396 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Dec 12 17:25:14.489475 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 12 17:25:14.489555 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Dec 12 17:25:14.489634 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Dec 12 17:25:14.489713 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 12 17:25:14.489794 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Dec 12 17:25:14.489873 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Dec 12 17:25:14.489951 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Dec 12 17:25:14.490030 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Dec 12 17:25:14.490108 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Dec 12 17:25:14.490198 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Dec 12 17:25:14.490281 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Dec 12 17:25:14.490360 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Dec 12 17:25:14.490438 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Dec 12 17:25:14.490516 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Dec 12 17:25:14.490595 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Dec 12 17:25:14.490673 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Dec 12 17:25:14.490752 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Dec 12 17:25:14.490831 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Dec 12 17:25:14.490911 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Dec 12 17:25:14.490990 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Dec 12 17:25:14.491068 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Dec 12 17:25:14.491156 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Dec 12 17:25:14.491238 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Dec 12 17:25:14.491320 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Dec 12 17:25:14.491399 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Dec 12 17:25:14.491476 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 12 17:25:14.491555 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Dec 12 17:25:14.491646 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Dec 12 17:25:14.491750 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Dec 12 17:25:14.491835 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 12 17:25:14.491918 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Dec 12 17:25:14.492005 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff] Dec 12 17:25:14.492087 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Dec 12 17:25:14.492185 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Dec 12 17:25:14.492266 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Dec 12 17:25:14.492347 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Dec 12 17:25:14.492426 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Dec 12 17:25:14.492508 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Dec 12 17:25:14.492588 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Dec 12 17:25:14.492674 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Dec 12 17:25:14.492756 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Dec 12 17:25:14.492840 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Dec 12 17:25:14.492922 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Dec 12 17:25:14.493003 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Dec 12 17:25:14.493082 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Dec 12 17:25:14.493174 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Dec 12 17:25:14.493257 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Dec 12 17:25:14.493337 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Dec 12 17:25:14.493416 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Dec 12 
17:25:14.493506 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Dec 12 17:25:14.493589 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Dec 12 17:25:14.493677 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Dec 12 17:25:14.493756 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Dec 12 17:25:14.493836 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] Dec 12 17:25:14.493917 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Dec 12 17:25:14.493995 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Dec 12 17:25:14.494073 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Dec 12 17:25:14.494168 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 12 17:25:14.494250 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Dec 12 17:25:14.494329 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Dec 12 17:25:14.494408 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Dec 12 17:25:14.494485 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 12 17:25:14.494565 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Dec 12 17:25:14.494646 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Dec 12 17:25:14.494731 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Dec 12 17:25:14.494810 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 12 17:25:14.494895 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Dec 12 17:25:14.494976 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Dec 12 17:25:14.495056 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Dec 12 17:25:14.495149 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Dec 12 17:25:14.495233 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Dec 12 17:25:14.495306 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 12 17:25:14.495387 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Dec 12 17:25:14.495470 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Dec 12 17:25:14.495547 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Dec 12 17:25:14.495629 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Dec 12 17:25:14.495705 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Dec 12 17:25:14.495802 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Dec 12 17:25:14.495884 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Dec 12 17:25:14.495978 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Dec 12 17:25:14.496054 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Dec 12 17:25:14.496154 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Dec 12 17:25:14.496230 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Dec 12 17:25:14.496311 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Dec 12 17:25:14.496385 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Dec 12 17:25:14.496464 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Dec 12 17:25:14.496541 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Dec 12 17:25:14.496620 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Dec 12 
17:25:14.496694 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Dec 12 17:25:14.496773 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Dec 12 17:25:14.496848 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Dec 12 17:25:14.496931 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Dec 12 17:25:14.497005 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Dec 12 17:25:14.497086 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Dec 12 17:25:14.497176 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Dec 12 17:25:14.497265 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Dec 12 17:25:14.497342 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Dec 12 17:25:14.497427 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff] Dec 12 17:25:14.497501 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Dec 12 17:25:14.497597 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Dec 12 17:25:14.497672 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Dec 12 17:25:14.497755 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Dec 12 17:25:14.497828 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Dec 12 17:25:14.497910 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Dec 12 17:25:14.497984 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Dec 12 17:25:14.498069 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Dec 12 17:25:14.498161 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Dec 12 17:25:14.498250 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Dec 12 17:25:14.498334 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Dec 12 17:25:14.498416 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Dec 12 17:25:14.498498 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Dec 12 17:25:14.498577 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Dec 12 17:25:14.498663 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Dec 12 17:25:14.498739 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Dec 12 17:25:14.498812 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Dec 12 17:25:14.498896 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Dec 12 17:25:14.498976 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Dec 12 17:25:14.499051 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Dec 12 17:25:14.499160 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Dec 12 17:25:14.499238 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Dec 12 17:25:14.499311 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Dec 12 17:25:14.499390 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Dec 12 17:25:14.499467 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Dec 12 17:25:14.499548 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Dec 12 17:25:14.499630 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Dec 12 17:25:14.499703 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Dec 12 17:25:14.499792 kernel: pci_bus 0000:18: resource 2 [mem 
0x8002e00000-0x8002ffffff 64bit pref] Dec 12 17:25:14.499881 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Dec 12 17:25:14.499968 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Dec 12 17:25:14.500042 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Dec 12 17:25:14.500141 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Dec 12 17:25:14.500220 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Dec 12 17:25:14.500301 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Dec 12 17:25:14.500382 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Dec 12 17:25:14.500464 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Dec 12 17:25:14.500542 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Dec 12 17:25:14.500623 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Dec 12 17:25:14.500697 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Dec 12 17:25:14.500772 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Dec 12 17:25:14.500860 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Dec 12 17:25:14.500934 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Dec 12 17:25:14.501018 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref] Dec 12 17:25:14.501104 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Dec 12 17:25:14.501197 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Dec 12 17:25:14.501274 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Dec 12 17:25:14.501357 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Dec 12 17:25:14.501432 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Dec 12 17:25:14.501506 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Dec 12 17:25:14.501586 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Dec 12 17:25:14.501667 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Dec 12 17:25:14.501750 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Dec 12 17:25:14.501834 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Dec 12 17:25:14.501908 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Dec 12 17:25:14.501985 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Dec 12 17:25:14.501995 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 12 17:25:14.502004 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 12 17:25:14.502012 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 12 17:25:14.502022 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 12 17:25:14.502031 kernel: iommu: Default domain type: Translated Dec 12 17:25:14.502038 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 12 17:25:14.502046 kernel: efivars: Registered efivars operations Dec 12 17:25:14.502054 kernel: vgaarb: loaded Dec 12 17:25:14.502062 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 12 17:25:14.502070 kernel: VFS: Disk quotas dquot_6.6.0 Dec 12 17:25:14.502079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 12 17:25:14.502087 kernel: pnp: PnP ACPI init Dec 12 17:25:14.502192 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Dec 12 17:25:14.502204 kernel: pnp: PnP ACPI: found 1 devices Dec 12 17:25:14.502213 kernel: NET: Registered 
PF_INET protocol family Dec 12 17:25:14.502221 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 12 17:25:14.502229 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Dec 12 17:25:14.502240 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 12 17:25:14.502248 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 12 17:25:14.502256 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 12 17:25:14.502264 kernel: TCP: Hash tables configured (established 131072 bind 65536) Dec 12 17:25:14.502272 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 12 17:25:14.502280 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Dec 12 17:25:14.502288 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 12 17:25:14.502378 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Dec 12 17:25:14.502389 kernel: PCI: CLS 0 bytes, default 64 Dec 12 17:25:14.502397 kernel: kvm [1]: HYP mode not available Dec 12 17:25:14.502405 kernel: Initialise system trusted keyrings Dec 12 17:25:14.502413 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Dec 12 17:25:14.502421 kernel: Key type asymmetric registered Dec 12 17:25:14.502428 kernel: Asymmetric key parser 'x509' registered Dec 12 17:25:14.502438 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 12 17:25:14.502446 kernel: io scheduler mq-deadline registered Dec 12 17:25:14.502454 kernel: io scheduler kyber registered Dec 12 17:25:14.502462 kernel: io scheduler bfq registered Dec 12 17:25:14.502471 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 12 17:25:14.502551 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Dec 12 17:25:14.502638 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Dec 12 17:25:14.502728 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.502809 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Dec 12 17:25:14.502889 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Dec 12 17:25:14.502968 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.503052 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Dec 12 17:25:14.503160 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Dec 12 17:25:14.503247 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.503328 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Dec 12 17:25:14.503407 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Dec 12 17:25:14.503485 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.503565 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Dec 12 17:25:14.503643 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Dec 12 17:25:14.503721 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.503819 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Dec 12 17:25:14.503900 kernel: pcieport 0000:00:01.5: AER: 
enabled with IRQ 55 Dec 12 17:25:14.503979 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.504059 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Dec 12 17:25:14.504154 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Dec 12 17:25:14.504249 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.504334 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Dec 12 17:25:14.504413 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Dec 12 17:25:14.504491 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.504502 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 12 17:25:14.504579 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Dec 12 17:25:14.504657 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Dec 12 17:25:14.504738 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.504817 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Dec 12 17:25:14.504897 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Dec 12 17:25:14.504983 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.505065 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Dec 12 17:25:14.505154 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Dec 12 17:25:14.505234 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.505317 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Dec 12 17:25:14.505401 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Dec 12 17:25:14.505480 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.505559 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Dec 12 17:25:14.505647 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Dec 12 17:25:14.505726 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.505809 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Dec 12 17:25:14.505888 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Dec 12 17:25:14.505985 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.506067 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Dec 12 17:25:14.506163 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Dec 12 17:25:14.506245 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.506331 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Dec 12 17:25:14.506412 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Dec 12 17:25:14.506492 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Dec 12 17:25:14.506503 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Dec 12 17:25:14.506580 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Dec 12 17:25:14.506665 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Dec 12 17:25:14.506745 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.506827 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Dec 12 17:25:14.506906 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Dec 12 17:25:14.506984 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.507063 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Dec 12 17:25:14.507149 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Dec 12 17:25:14.507229 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.507311 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Dec 12 17:25:14.507397 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Dec 12 17:25:14.507476 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.507556 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Dec 12 17:25:14.507641 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Dec 12 17:25:14.507720 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.507819 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Dec 12 17:25:14.507900 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Dec 12 17:25:14.507978 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.508058 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Dec 12 17:25:14.508153 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Dec 12 17:25:14.508234 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.508326 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Dec 12 17:25:14.508410 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Dec 12 17:25:14.508490 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.508501 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 12 17:25:14.508578 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Dec 12 17:25:14.508658 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Dec 12 17:25:14.508738 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.508821 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Dec 12 17:25:14.508901 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Dec 12 17:25:14.508980 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.509061 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Dec 12 
17:25:14.509164 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Dec 12 17:25:14.509249 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.509334 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Dec 12 17:25:14.509415 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Dec 12 17:25:14.509494 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.509574 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Dec 12 17:25:14.509659 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Dec 12 17:25:14.509746 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.509830 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Dec 12 17:25:14.509923 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Dec 12 17:25:14.510006 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.510087 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Dec 12 17:25:14.510181 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Dec 12 17:25:14.510261 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.510341 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Dec 12 17:25:14.510423 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Dec 12 17:25:14.510502 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.510582 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Dec 12 17:25:14.510661 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Dec 12 17:25:14.510740 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 12 17:25:14.510750 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 12 17:25:14.510761 kernel: ACPI: button: Power Button [PWRB] Dec 12 17:25:14.510843 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Dec 12 17:25:14.510930 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Dec 12 17:25:14.510941 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 12 17:25:14.510949 kernel: thunder_xcv, ver 1.0 Dec 12 17:25:14.510957 kernel: thunder_bgx, ver 1.0 Dec 12 17:25:14.510965 kernel: nicpf, ver 1.0 Dec 12 17:25:14.510975 kernel: nicvf, ver 1.0 Dec 12 17:25:14.511076 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 12 17:25:14.511166 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-12-12T17:25:13 UTC (1765560313) Dec 12 17:25:14.511177 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 12 17:25:14.511186 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Dec 12 17:25:14.511194 kernel: watchdog: NMI not fully supported Dec 12 17:25:14.511205 kernel: watchdog: Hard watchdog permanently disabled Dec 12 17:25:14.511213 kernel: NET: Registered PF_INET6 protocol family Dec 12 17:25:14.511220 kernel: Segment Routing with IPv6 Dec 12 17:25:14.511228 kernel: In-situ OAM (IOAM) with 
IPv6 Dec 12 17:25:14.511236 kernel: NET: Registered PF_PACKET protocol family Dec 12 17:25:14.511244 kernel: Key type dns_resolver registered Dec 12 17:25:14.511252 kernel: registered taskstats version 1 Dec 12 17:25:14.511260 kernel: Loading compiled-in X.509 certificates Dec 12 17:25:14.511270 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: a5d527f63342895c4af575176d4ae6e640b6d0e9' Dec 12 17:25:14.511278 kernel: Demotion targets for Node 0: null Dec 12 17:25:14.511285 kernel: Key type .fscrypt registered Dec 12 17:25:14.511293 kernel: Key type fscrypt-provisioning registered Dec 12 17:25:14.511301 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 12 17:25:14.511309 kernel: ima: Allocated hash algorithm: sha1 Dec 12 17:25:14.511317 kernel: ima: No architecture policies found Dec 12 17:25:14.511327 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 12 17:25:14.511334 kernel: clk: Disabling unused clocks Dec 12 17:25:14.511342 kernel: PM: genpd: Disabling unused power domains Dec 12 17:25:14.511351 kernel: Freeing unused kernel memory: 12416K Dec 12 17:25:14.511359 kernel: Run /init as init process Dec 12 17:25:14.511366 kernel: with arguments: Dec 12 17:25:14.511374 kernel: /init Dec 12 17:25:14.511383 kernel: with environment: Dec 12 17:25:14.511391 kernel: HOME=/ Dec 12 17:25:14.511399 kernel: TERM=linux Dec 12 17:25:14.511406 kernel: ACPI: bus type USB registered Dec 12 17:25:14.511414 kernel: usbcore: registered new interface driver usbfs Dec 12 17:25:14.511422 kernel: usbcore: registered new interface driver hub Dec 12 17:25:14.511430 kernel: usbcore: registered new device driver usb Dec 12 17:25:14.511515 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 12 17:25:14.511599 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Dec 12 17:25:14.511680 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 12 17:25:14.511775 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Dec 12 17:25:14.511858 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Dec 12 17:25:14.511939 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Dec 12 17:25:14.512060 kernel: hub 1-0:1.0: USB hub found Dec 12 17:25:14.512190 kernel: hub 1-0:1.0: 4 ports detected Dec 12 17:25:14.512296 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 12 17:25:14.512394 kernel: hub 2-0:1.0: USB hub found Dec 12 17:25:14.512484 kernel: hub 2-0:1.0: 4 ports detected Dec 12 17:25:14.512576 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Dec 12 17:25:14.512661 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Dec 12 17:25:14.512672 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 12 17:25:14.512680 kernel: GPT:25804799 != 104857599 Dec 12 17:25:14.512689 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 12 17:25:14.512697 kernel: GPT:25804799 != 104857599 Dec 12 17:25:14.512705 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 12 17:25:14.512714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 12 17:25:14.512722 kernel: SCSI subsystem initialized Dec 12 17:25:14.512731 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 12 17:25:14.512739 kernel: device-mapper: uevent: version 1.0.3 Dec 12 17:25:14.512748 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 12 17:25:14.512756 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Dec 12 17:25:14.512764 kernel: raid6: neonx8 gen() 15742 MB/s Dec 12 17:25:14.512774 kernel: raid6: neonx4 gen() 15600 MB/s Dec 12 17:25:14.512782 kernel: raid6: neonx2 gen() 12476 MB/s Dec 12 17:25:14.512790 kernel: raid6: neonx1 gen() 10549 MB/s Dec 12 17:25:14.512798 kernel: raid6: int64x8 gen() 6851 MB/s Dec 12 17:25:14.512806 kernel: raid6: int64x4 gen() 7365 MB/s Dec 12 17:25:14.512815 kernel: raid6: int64x2 gen() 6108 MB/s Dec 12 17:25:14.512823 kernel: raid6: int64x1 gen() 5041 MB/s Dec 12 17:25:14.512832 kernel: raid6: using algorithm neonx8 gen() 15742 MB/s Dec 12 17:25:14.512841 kernel: raid6: .... xor() 12082 MB/s, rmw enabled Dec 12 17:25:14.512849 kernel: raid6: using neon recovery algorithm Dec 12 17:25:14.512859 kernel: xor: measuring software checksum speed Dec 12 17:25:14.512868 kernel: 8regs : 21624 MB/sec Dec 12 17:25:14.512877 kernel: 32regs : 20812 MB/sec Dec 12 17:25:14.512885 kernel: arm64_neon : 26080 MB/sec Dec 12 17:25:14.512894 kernel: xor: using function: arm64_neon (26080 MB/sec) Dec 12 17:25:14.513008 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 12 17:25:14.513021 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 12 17:25:14.513031 kernel: BTRFS: device fsid d09b8b5a-fb5f-4a17-94ef-0a452535b2bc devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (274) Dec 12 17:25:14.513039 kernel: BTRFS info (device dm-0): first mount of filesystem d09b8b5a-fb5f-4a17-94ef-0a452535b2bc Dec 12 17:25:14.513048 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:25:14.513059 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 12 17:25:14.513068 kernel: BTRFS info (device dm-0): enabling free space tree Dec 12 17:25:14.513076 kernel: loop: module loaded Dec 12 17:25:14.513084 kernel: loop0: detected capacity change from 0 to 91480 Dec 12 17:25:14.513092 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 12 17:25:14.513205 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Dec 12 17:25:14.513222 systemd[1]: Successfully made /usr/ read-only. Dec 12 17:25:14.513234 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:25:14.513244 systemd[1]: Detected virtualization kvm. Dec 12 17:25:14.513253 systemd[1]: Detected architecture arm64. Dec 12 17:25:14.513261 systemd[1]: Running in initrd. Dec 12 17:25:14.513270 systemd[1]: No hostname configured, using default hostname. Dec 12 17:25:14.513280 systemd[1]: Hostname set to . Dec 12 17:25:14.513289 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:25:14.513298 systemd[1]: Queued start job for default target initrd.target. Dec 12 17:25:14.513307 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:25:14.513316 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 12 17:25:14.513325 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:25:14.513336 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 12 17:25:14.513345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:25:14.513354 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 12 17:25:14.513363 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 12 17:25:14.513372 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:25:14.513381 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:25:14.513391 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:25:14.513400 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:25:14.513409 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:25:14.513418 systemd[1]: Reached target swap.target - Swaps. Dec 12 17:25:14.513428 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:25:14.513437 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:25:14.513446 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:25:14.513456 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:25:14.513465 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 12 17:25:14.513474 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 12 17:25:14.513483 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:25:14.513492 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:25:14.513501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:25:14.513510 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:25:14.513520 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 12 17:25:14.513529 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 12 17:25:14.513538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:25:14.513547 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 12 17:25:14.513556 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 12 17:25:14.513565 systemd[1]: Starting systemd-fsck-usr.service... Dec 12 17:25:14.513575 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:25:14.513584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:25:14.513594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:14.513602 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 12 17:25:14.513613 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:25:14.513622 systemd[1]: Finished systemd-fsck-usr.service. Dec 12 17:25:14.513631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 12 17:25:14.513662 systemd-journald[417]: Collecting audit messages is enabled. Dec 12 17:25:14.513684 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 12 17:25:14.513693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 12 17:25:14.513702 kernel: Bridge firewalling registered Dec 12 17:25:14.513711 kernel: audit: type=1130 audit(1765560314.453:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.513720 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:25:14.513730 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:25:14.513739 kernel: audit: type=1130 audit(1765560314.474:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.513748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:14.513757 kernel: audit: type=1130 audit(1765560314.478:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.513766 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 12 17:25:14.513775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:25:14.513784 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:25:14.513794 kernel: audit: type=1130 audit(1765560314.495:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.513803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:25:14.513812 kernel: audit: type=1130 audit(1765560314.505:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.513820 kernel: audit: type=1334 audit(1765560314.508:7): prog-id=6 op=LOAD Dec 12 17:25:14.513828 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:25:14.513837 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:25:14.513849 systemd-journald[417]: Journal started Dec 12 17:25:14.513868 systemd-journald[417]: Runtime Journal (/run/log/journal/e891f032f9394d78972bcab34dee018a) is 8M, max 319.5M, 311.5M free. Dec 12 17:25:14.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:14.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.508000 audit: BPF prog-id=6 op=LOAD Dec 12 17:25:14.454449 systemd-modules-load[418]: Inserted module 'br_netfilter' Dec 12 17:25:14.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.519141 kernel: audit: type=1130 audit(1765560314.516:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.519166 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:25:14.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.523186 kernel: audit: type=1130 audit(1765560314.519:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.523917 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 12 17:25:14.525571 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:25:14.537485 systemd-tmpfiles[458]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 12 17:25:14.541045 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:25:14.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.545600 dracut-cmdline[456]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=f511955c7ec069359d088640c1194932d6d915b5bb2829e8afbb591f10cd0849 Dec 12 17:25:14.550371 kernel: audit: type=1130 audit(1765560314.542:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.571493 systemd-resolved[444]: Positive Trust Anchors: Dec 12 17:25:14.571512 systemd-resolved[444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:25:14.571515 systemd-resolved[444]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:25:14.571550 systemd-resolved[444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:25:14.595690 systemd-resolved[444]: Defaulting to hostname 'linux'. Dec 12 17:25:14.596760 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:25:14.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.598644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:25:14.629149 kernel: Loading iSCSI transport class v2.0-870. Dec 12 17:25:14.642343 kernel: iscsi: registered transport (tcp) Dec 12 17:25:14.656470 kernel: iscsi: registered transport (qla4xxx) Dec 12 17:25:14.656510 kernel: QLogic iSCSI HBA Driver Dec 12 17:25:14.678678 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:25:14.696709 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:25:14.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.698218 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:25:14.744384 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 12 17:25:14.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.746633 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 12 17:25:14.748161 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 12 17:25:14.777328 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:25:14.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.779000 audit: BPF prog-id=7 op=LOAD Dec 12 17:25:14.779000 audit: BPF prog-id=8 op=LOAD Dec 12 17:25:14.779776 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:25:14.809842 systemd-udevd[697]: Using default interface naming scheme 'v257'. Dec 12 17:25:14.817770 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:25:14.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:14.820848 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 12 17:25:14.844525 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 12 17:25:14.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.846000 audit: BPF prog-id=9 op=LOAD Dec 12 17:25:14.847506 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:25:14.850995 dracut-pre-trigger[768]: rd.md=0: removing MD RAID activation Dec 12 17:25:14.875509 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:25:14.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.877854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:25:14.888883 systemd-networkd[808]: lo: Link UP Dec 12 17:25:14.888890 systemd-networkd[808]: lo: Gained carrier Dec 12 17:25:14.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.889376 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:25:14.890858 systemd[1]: Reached target network.target - Network. Dec 12 17:25:14.969212 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:25:14.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:14.972919 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 12 17:25:15.039753 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 12 17:25:15.048350 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 12 17:25:15.067166 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Dec 12 17:25:15.066886 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:25:15.074131 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Dec 12 17:25:15.074157 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 12 17:25:15.077279 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 12 17:25:15.089136 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Dec 12 17:25:15.102451 disk-uuid[876]: Primary Header is updated. Dec 12 17:25:15.102451 disk-uuid[876]: Secondary Entries is updated. Dec 12 17:25:15.102451 disk-uuid[876]: Secondary Header is updated. Dec 12 17:25:15.109748 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:25:15.109875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
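The entries above show systemd locating the boot partitions through their udev-maintained /dev/disk/by-label aliases (ROOT, EFI-SYSTEM, OEM, USR-A). A minimal Python sketch of resolving such aliases to the underlying block devices follows; it assumes only a Linux host where udev has populated /dev/disk/by-label and is illustrative, not part of the boot tooling.

    #!/usr/bin/env python3
    # Resolve /dev/disk/by-label aliases (e.g. ROOT, EFI-SYSTEM, OEM, USR-A,
    # as seen in the log above) to the underlying block devices.
    import os

    BY_LABEL = "/dev/disk/by-label"

    def labels():
        if not os.path.isdir(BY_LABEL):
            return {}
        return {
            name: os.path.realpath(os.path.join(BY_LABEL, name))
            for name in sorted(os.listdir(BY_LABEL))
        }

    if __name__ == "__main__":
        for label, dev in labels().items():
            print(f"{label:12s} -> {dev}")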
Dec 12 17:25:15.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:15.115261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:15.118655 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:15.128708 systemd-networkd[808]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:15.128723 systemd-networkd[808]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:25:15.129161 systemd-networkd[808]: eth0: Link UP Dec 12 17:25:15.129994 systemd-networkd[808]: eth0: Gained carrier Dec 12 17:25:15.130006 systemd-networkd[808]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:15.146138 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Dec 12 17:25:15.148261 kernel: usbcore: registered new interface driver usbhid Dec 12 17:25:15.148300 kernel: usbhid: USB HID core driver Dec 12 17:25:15.161987 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:15.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:15.197226 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 12 17:25:15.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:15.198204 systemd-networkd[808]: eth0: DHCPv4 address 10.0.11.71/25, gateway 10.0.11.1 acquired from 10.0.11.1 Dec 12 17:25:15.198412 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:25:15.200158 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:25:15.202105 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:25:15.204967 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 12 17:25:15.222788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:25:15.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.154881 disk-uuid[879]: Warning: The kernel is still using the old partition table. Dec 12 17:25:16.154881 disk-uuid[879]: The new table will be used at the next reboot or after you Dec 12 17:25:16.154881 disk-uuid[879]: run partprobe(8) or kpartx(8) Dec 12 17:25:16.154881 disk-uuid[879]: The operation has completed successfully. Dec 12 17:25:16.163468 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 12 17:25:16.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:16.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.163572 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 12 17:25:16.167251 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 12 17:25:16.200151 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (912) Dec 12 17:25:16.203139 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:25:16.203156 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:25:16.208138 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:25:16.208160 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:25:16.215140 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:25:16.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.214165 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 12 17:25:16.218240 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 12 17:25:16.376275 ignition[931]: Ignition 2.22.0 Dec 12 17:25:16.376292 ignition[931]: Stage: fetch-offline Dec 12 17:25:16.376335 ignition[931]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:16.376346 ignition[931]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:16.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.378826 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:25:16.376517 ignition[931]: parsed url from cmdline: "" Dec 12 17:25:16.381453 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 12 17:25:16.376520 ignition[931]: no config URL provided Dec 12 17:25:16.376525 ignition[931]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:25:16.376533 ignition[931]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:25:16.376537 ignition[931]: failed to fetch config: resource requires networking Dec 12 17:25:16.376903 ignition[931]: Ignition finished successfully Dec 12 17:25:16.407378 ignition[942]: Ignition 2.22.0 Dec 12 17:25:16.407391 ignition[942]: Stage: fetch Dec 12 17:25:16.407643 ignition[942]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:16.407651 ignition[942]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:16.407749 ignition[942]: parsed url from cmdline: "" Dec 12 17:25:16.407753 ignition[942]: no config URL provided Dec 12 17:25:16.407757 ignition[942]: reading system config file "/usr/lib/ignition/user.ign" Dec 12 17:25:16.407763 ignition[942]: no config at "/usr/lib/ignition/user.ign" Dec 12 17:25:16.408210 ignition[942]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 12 17:25:16.408313 ignition[942]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 12 17:25:16.408431 ignition[942]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
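At this point Ignition's fetch stage is probing for an OpenStack config drive (labels config-2 / CONFIG-2) while also trying the metadata service URL shown above. The Python sketch below illustrates that probe-then-fall-back pattern; it is not Ignition's actual implementation, and the retry count, delay, and timeout are assumptions for illustration.

    #!/usr/bin/env python3
    # Illustrative probe: prefer a config drive by label, otherwise fall back
    # to the OpenStack metadata service (URL taken from the log above).
    import os
    import time
    import urllib.request

    CONFIG_DRIVE_LABELS = ("config-2", "CONFIG-2")
    METADATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def find_config_drive():
        for label in CONFIG_DRIVE_LABELS:
            path = f"/dev/disk/by-label/{label}"
            if os.path.exists(path):
                return path
        return None

    def fetch_user_data(retries=5, delay=2.0):
        for _ in range(retries):
            drive = find_config_drive()
            if drive:
                return "config-drive", drive
            try:
                with urllib.request.urlopen(METADATA_URL, timeout=10) as resp:
                    return "metadata-service", resp.read()
            except OSError:
                time.sleep(delay)  # mirrors the "not found. Waiting..." lines
        raise RuntimeError("no config drive and metadata service unreachable")

    if __name__ == "__main__":
        source, _payload = fetch_user_data()
        print("user_data obtained via", source)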
Dec 12 17:25:16.680453 ignition[942]: GET result: OK Dec 12 17:25:16.680726 ignition[942]: parsing config with SHA512: 10ecabc9044e9c56c20f6274a20d422085a2b132e0b2e447418bfa64b106ff3660d41a7b20de3531e2af917805e15ecc8e5062691cc0c18986fe0b44092455f1 Dec 12 17:25:16.685271 unknown[942]: fetched base config from "system" Dec 12 17:25:16.685280 unknown[942]: fetched base config from "system" Dec 12 17:25:16.685605 ignition[942]: fetch: fetch complete Dec 12 17:25:16.685286 unknown[942]: fetched user config from "openstack" Dec 12 17:25:16.685609 ignition[942]: fetch: fetch passed Dec 12 17:25:16.689821 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 12 17:25:16.694311 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 12 17:25:16.694345 kernel: audit: type=1130 audit(1765560316.690:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.685648 ignition[942]: Ignition finished successfully Dec 12 17:25:16.691796 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 12 17:25:16.724136 ignition[950]: Ignition 2.22.0 Dec 12 17:25:16.724150 ignition[950]: Stage: kargs Dec 12 17:25:16.724285 ignition[950]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:16.724294 ignition[950]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:16.727685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 12 17:25:16.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.725017 ignition[950]: kargs: kargs passed Dec 12 17:25:16.733584 kernel: audit: type=1130 audit(1765560316.728:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.730022 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Dec 12 17:25:16.725062 ignition[950]: Ignition finished successfully Dec 12 17:25:16.757345 ignition[959]: Ignition 2.22.0 Dec 12 17:25:16.757361 ignition[959]: Stage: disks Dec 12 17:25:16.757501 ignition[959]: no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:16.757509 ignition[959]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:16.758267 ignition[959]: disks: disks passed Dec 12 17:25:16.758309 ignition[959]: Ignition finished successfully Dec 12 17:25:16.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.761800 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 12 17:25:16.767079 kernel: audit: type=1130 audit(1765560316.762:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:16.762917 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
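The fetch stage above logs a SHA512 fingerprint of the user_data it parsed. A short sketch of producing the same style of fingerprint from raw config bytes; the payload below is a placeholder invented for illustration, not the instance's real config.

    #!/usr/bin/env python3
    # Compute the kind of SHA512 fingerprint Ignition logs as
    # "parsing config with SHA512: ...". The payload is a placeholder.
    import hashlib

    def config_fingerprint(payload: bytes) -> str:
        return hashlib.sha512(payload).hexdigest()

    if __name__ == "__main__":
        placeholder = b'{"ignition": {"version": "3.4.0"}}'
        print("parsing config with SHA512:", config_fingerprint(placeholder))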
Dec 12 17:25:16.766381 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 12 17:25:16.768172 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:25:16.769849 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:25:16.771647 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:25:16.773974 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 12 17:25:17.013717 systemd-fsck[969]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Dec 12 17:25:17.017541 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 12 17:25:17.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.019671 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 12 17:25:17.024066 kernel: audit: type=1130 audit(1765560317.018:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.043320 systemd-networkd[808]: eth0: Gained IPv6LL Dec 12 17:25:17.111176 kernel: EXT4-fs (vda9): mounted filesystem fa93fc03-2e23-46f9-9013-1e396e3304a8 r/w with ordered data mode. Quota mode: none. Dec 12 17:25:17.111752 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 12 17:25:17.113012 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 12 17:25:17.116411 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:25:17.118277 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 12 17:25:17.119185 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 12 17:25:17.119765 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 12 17:25:17.121721 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 12 17:25:17.121761 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:25:17.133232 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 12 17:25:17.136474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 12 17:25:17.147160 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (977) Dec 12 17:25:17.150856 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:25:17.150905 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:25:17.156249 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:25:17.156311 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:25:17.158770 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 12 17:25:17.197282 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:17.212250 initrd-setup-root[1008]: cut: /sysroot/etc/passwd: No such file or directory Dec 12 17:25:17.217991 initrd-setup-root[1015]: cut: /sysroot/etc/group: No such file or directory Dec 12 17:25:17.222844 initrd-setup-root[1022]: cut: /sysroot/etc/shadow: No such file or directory Dec 12 17:25:17.227040 initrd-setup-root[1029]: cut: /sysroot/etc/gshadow: No such file or directory Dec 12 17:25:17.322440 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 12 17:25:17.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.325435 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 12 17:25:17.327968 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 12 17:25:17.330101 kernel: audit: type=1130 audit(1765560317.323:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.343529 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 12 17:25:17.346126 kernel: BTRFS info (device vda6): last unmount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:25:17.361305 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 12 17:25:17.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.366157 kernel: audit: type=1130 audit(1765560317.362:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.376842 ignition[1098]: INFO : Ignition 2.22.0 Dec 12 17:25:17.376842 ignition[1098]: INFO : Stage: mount Dec 12 17:25:17.378419 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:17.378419 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:17.378419 ignition[1098]: INFO : mount: mount passed Dec 12 17:25:17.378419 ignition[1098]: INFO : Ignition finished successfully Dec 12 17:25:17.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:17.381980 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 12 17:25:17.386156 kernel: audit: type=1130 audit(1765560317.382:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:18.244142 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:20.253137 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:24.262140 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:24.271000 coreos-metadata[979]: Dec 12 17:25:24.270 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:25:24.289299 coreos-metadata[979]: Dec 12 17:25:24.289 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 17:25:25.976610 coreos-metadata[979]: Dec 12 17:25:25.976 INFO Fetch successful Dec 12 17:25:25.977803 coreos-metadata[979]: Dec 12 17:25:25.977 INFO wrote hostname ci-4515-1-0-e-d121438740 to /sysroot/etc/hostname Dec 12 17:25:25.979143 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 12 17:25:25.980184 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 12 17:25:25.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:25.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:25.982323 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 12 17:25:25.988481 kernel: audit: type=1130 audit(1765560325.981:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:25.988505 kernel: audit: type=1131 audit(1765560325.981:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:25.999022 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 12 17:25:26.023051 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1115) Dec 12 17:25:26.023092 kernel: BTRFS info (device vda6): first mount of filesystem 006ba4f4-0786-4a38-abb9-900c84a8b97a Dec 12 17:25:26.023104 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 12 17:25:26.030285 kernel: BTRFS info (device vda6): turning on async discard Dec 12 17:25:26.030348 kernel: BTRFS info (device vda6): enabling free space tree Dec 12 17:25:26.031692 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
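The hostname agent logged above fetches the instance hostname from the metadata API and writes it to /sysroot/etc/hostname before the Ignition files stage runs. A hedged Python sketch of that flow; the URL and target path are taken from the log, while the retry behaviour is an assumption and this is not the real metadata agent.

    #!/usr/bin/env python3
    # Sketch of the hostname step: GET the hostname from the metadata API,
    # then write it to /sysroot/etc/hostname (both taken from the log above).
    import time
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"
    TARGET = "/sysroot/etc/hostname"

    def fetch_hostname(attempts=10, delay=5.0) -> str:
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
                    return resp.read().decode().strip()
            except OSError:
                print(f"Fetching {HOSTNAME_URL}: attempt #{attempt} failed")
                time.sleep(delay)
        raise RuntimeError("metadata service unreachable")

    if __name__ == "__main__":
        hostname = fetch_hostname()
        try:
            with open(TARGET, "w") as f:
                f.write(hostname + "\n")
            print(f"wrote hostname {hostname} to {TARGET}")
        except OSError as err:
            print(f"could not write {TARGET}: {err}")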
Dec 12 17:25:26.064858 ignition[1133]: INFO : Ignition 2.22.0 Dec 12 17:25:26.064858 ignition[1133]: INFO : Stage: files Dec 12 17:25:26.066541 ignition[1133]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:26.066541 ignition[1133]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:26.066541 ignition[1133]: DEBUG : files: compiled without relabeling support, skipping Dec 12 17:25:26.069559 ignition[1133]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 12 17:25:26.069559 ignition[1133]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 12 17:25:26.072309 ignition[1133]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 12 17:25:26.073613 ignition[1133]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 12 17:25:26.073613 ignition[1133]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 12 17:25:26.072819 unknown[1133]: wrote ssh authorized keys file for user: core Dec 12 17:25:26.076987 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:25:26.076987 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Dec 12 17:25:26.265107 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 12 17:25:26.463695 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Dec 12 17:25:26.463695 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:25:26.467740 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Dec 12 17:25:26.579614 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 12 17:25:27.531805 ignition[1133]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Dec 12 17:25:27.531805 ignition[1133]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 12 17:25:27.535788 ignition[1133]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 12 17:25:27.537544 ignition[1133]: INFO : files: files passed Dec 12 17:25:27.537544 ignition[1133]: INFO : Ignition finished successfully Dec 12 17:25:27.551674 kernel: audit: type=1130 audit(1765560327.539:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.538610 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 12 17:25:27.541063 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 12 17:25:27.544943 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 12 17:25:27.559893 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 12 17:25:27.560001 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 12 17:25:27.568348 kernel: audit: type=1130 audit(1765560327.562:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.568377 kernel: audit: type=1131 audit(1765560327.562:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:27.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.568459 initrd-setup-root-after-ignition[1166]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:25:27.568459 initrd-setup-root-after-ignition[1166]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:25:27.575254 kernel: audit: type=1130 audit(1765560327.572:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.571639 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:25:27.577218 initrd-setup-root-after-ignition[1170]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 12 17:25:27.573375 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 12 17:25:27.577077 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 12 17:25:27.632035 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 12 17:25:27.632212 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 12 17:25:27.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.634257 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 12 17:25:27.640951 kernel: audit: type=1130 audit(1765560327.633:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.640979 kernel: audit: type=1131 audit(1765560327.633:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.640104 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 12 17:25:27.642034 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 12 17:25:27.643047 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 12 17:25:27.674692 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:25:27.679151 kernel: audit: type=1130 audit(1765560327.675:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:25:27.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.678478 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 12 17:25:27.703695 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Dec 12 17:25:27.703835 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:25:27.705877 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:25:27.707719 systemd[1]: Stopped target timers.target - Timer Units. Dec 12 17:25:27.709298 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 12 17:25:27.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.709434 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 12 17:25:27.714879 kernel: audit: type=1131 audit(1765560327.710:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.714008 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 12 17:25:27.715830 systemd[1]: Stopped target basic.target - Basic System. Dec 12 17:25:27.717279 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 12 17:25:27.718801 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 12 17:25:27.720583 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 12 17:25:27.722281 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 12 17:25:27.724034 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 12 17:25:27.725795 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 12 17:25:27.727538 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 12 17:25:27.729272 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 12 17:25:27.730836 systemd[1]: Stopped target swap.target - Swaps. Dec 12 17:25:27.732214 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 12 17:25:27.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.732352 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 12 17:25:27.734435 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:25:27.736158 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:25:27.737964 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 12 17:25:27.742262 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:25:27.744789 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 12 17:25:27.744925 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Dec 12 17:25:27.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.747449 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 12 17:25:27.747583 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 12 17:25:27.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.749579 systemd[1]: ignition-files.service: Deactivated successfully. Dec 12 17:25:27.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.749678 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 12 17:25:27.752172 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 12 17:25:27.753555 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 12 17:25:27.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.753683 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:25:27.756160 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 12 17:25:27.757831 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 12 17:25:27.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.757967 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:25:27.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.759691 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 12 17:25:27.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.759815 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:25:27.761369 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 12 17:25:27.761483 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 12 17:25:27.767501 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 12 17:25:27.768482 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 12 17:25:27.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:27.779210 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 12 17:25:27.784945 ignition[1190]: INFO : Ignition 2.22.0 Dec 12 17:25:27.784945 ignition[1190]: INFO : Stage: umount Dec 12 17:25:27.786450 ignition[1190]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 12 17:25:27.786450 ignition[1190]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 12 17:25:27.786450 ignition[1190]: INFO : umount: umount passed Dec 12 17:25:27.786450 ignition[1190]: INFO : Ignition finished successfully Dec 12 17:25:27.786076 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 12 17:25:27.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.789261 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 12 17:25:27.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.790853 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 12 17:25:27.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.790980 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 12 17:25:27.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.792638 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 12 17:25:27.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.792731 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 12 17:25:27.793772 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 12 17:25:27.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.793819 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 12 17:25:27.795279 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 12 17:25:27.795331 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 12 17:25:27.796846 systemd[1]: Stopped target network.target - Network. Dec 12 17:25:27.798268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 12 17:25:27.798328 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 12 17:25:27.800028 systemd[1]: Stopped target paths.target - Path Units. Dec 12 17:25:27.801501 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 12 17:25:27.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.802176 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
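The kernel-echoed audit records throughout this log carry their own timestamp and serial in the form audit(&lt;epoch seconds&gt;.&lt;milliseconds&gt;:&lt;serial&gt;), and type=1130/1131 correspond to the SERVICE_START/SERVICE_STOP events shown; 1765560327, for example, decodes to the Dec 12 17:25:27 UTC wall-clock prefix on the same lines. A small Python sketch that decodes such records follows; the sample lines are shortened copies from this log.

    #!/usr/bin/env python3
    # Decode kernel-echoed audit records such as
    #   audit: type=1130 audit(1765560327.539:40): ... msg='unit=ignition-files ...'
    # type=1130 is SERVICE_START and type=1131 is SERVICE_STOP.
    import re
    from datetime import datetime, timezone

    PATTERN = re.compile(
        r"type=(?P<type>\d+) audit\((?P<secs>\d+)\.(?P<ms>\d+):(?P<serial>\d+)\)"
        r".*?unit=(?P<unit>[\w@.\\-]+)"
    )
    KIND = {"1130": "SERVICE_START", "1131": "SERVICE_STOP"}

    def decode(line: str):
        m = PATTERN.search(line)
        if not m:
            return None
        stamp = datetime.fromtimestamp(int(m["secs"]), tz=timezone.utc)
        stamp = stamp.replace(microsecond=int(m["ms"]) * 1000)
        return stamp, KIND.get(m["type"], m["type"]), m["unit"], int(m["serial"])

    if __name__ == "__main__":
        samples = [
            "audit: type=1130 audit(1765560327.539:40): msg='unit=ignition-files res=success'",
            "audit: type=1131 audit(1765560327.562:42): msg='unit=ignition-quench res=success'",
        ]
        for line in samples:
            print(decode(line))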
Dec 12 17:25:27.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.803214 systemd[1]: Stopped target slices.target - Slice Units. Dec 12 17:25:27.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.804773 systemd[1]: Stopped target sockets.target - Socket Units. Dec 12 17:25:27.806284 systemd[1]: iscsid.socket: Deactivated successfully. Dec 12 17:25:27.806343 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 12 17:25:27.807692 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 12 17:25:27.807726 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 12 17:25:27.809324 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Dec 12 17:25:27.809364 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:25:27.811111 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 12 17:25:27.811181 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 12 17:25:27.812676 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 12 17:25:27.812719 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 12 17:25:27.827000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.814297 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 12 17:25:27.814348 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 12 17:25:27.816372 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 12 17:25:27.817841 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 12 17:25:27.826557 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 12 17:25:27.826674 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 12 17:25:27.834000 audit: BPF prog-id=6 op=UNLOAD Dec 12 17:25:27.834925 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 12 17:25:27.835879 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 12 17:25:27.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.839024 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 12 17:25:27.840160 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 12 17:25:27.840202 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:25:27.842785 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 12 17:25:27.844000 audit: BPF prog-id=9 op=UNLOAD Dec 12 17:25:27.844622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 12 17:25:27.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.844699 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Dec 12 17:25:27.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.846467 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 12 17:25:27.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.846515 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:25:27.848178 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 12 17:25:27.848233 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 12 17:25:27.850127 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:25:27.863408 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 12 17:25:27.863576 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:25:27.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.865509 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 12 17:25:27.865547 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 12 17:25:27.867079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 12 17:25:27.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.867180 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:25:27.868821 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 12 17:25:27.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.868877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 12 17:25:27.871286 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 12 17:25:27.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.871346 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 12 17:25:27.873887 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 12 17:25:27.873939 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 12 17:25:27.879752 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 12 17:25:27.880747 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 12 17:25:27.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.880812 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:25:27.883610 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Dec 12 17:25:27.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.883677 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:25:27.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.886051 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 12 17:25:27.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.886103 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:27.888526 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 12 17:25:27.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:27.888640 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 12 17:25:27.889955 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 12 17:25:27.891181 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 12 17:25:27.893354 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 12 17:25:27.896019 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 12 17:25:27.914332 systemd[1]: Switching root. Dec 12 17:25:27.956483 systemd-journald[417]: Journal stopped Dec 12 17:25:28.883901 systemd-journald[417]: Received SIGTERM from PID 1 (systemd). Dec 12 17:25:28.883976 kernel: SELinux: policy capability network_peer_controls=1 Dec 12 17:25:28.883994 kernel: SELinux: policy capability open_perms=1 Dec 12 17:25:28.884005 kernel: SELinux: policy capability extended_socket_class=1 Dec 12 17:25:28.884019 kernel: SELinux: policy capability always_check_network=0 Dec 12 17:25:28.884030 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 12 17:25:28.884048 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 12 17:25:28.884058 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 12 17:25:28.884068 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 12 17:25:28.884081 kernel: SELinux: policy capability userspace_initial_context=0 Dec 12 17:25:28.884092 systemd[1]: Successfully loaded SELinux policy in 63.770ms. Dec 12 17:25:28.884110 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.909ms. Dec 12 17:25:28.884136 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 12 17:25:28.884150 systemd[1]: Detected virtualization kvm. 
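The systemd version banner above lists its compile-time features as a string of +NAME/-NAME flags. A small sketch that splits that banner into enabled and disabled sets; the string below is copied from this log.

    #!/usr/bin/env python3
    # Split the systemd compile-time feature banner (copied from the log above)
    # into enabled (+NAME) and disabled (-NAME) feature sets.
    FEATURES = (
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT "
        "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC "
        "+KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 "
        "-PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD "
        "-BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE"
    )

    def parse(features: str):
        tokens = features.split()
        enabled = {t[1:] for t in tokens if t.startswith("+")}
        disabled = {t[1:] for t in tokens if t.startswith("-")}
        return enabled, disabled

    if __name__ == "__main__":
        enabled, disabled = parse(FEATURES)
        print(f"{len(enabled)} enabled, {len(disabled)} disabled")
        print("SELINUX enabled:", "SELINUX" in enabled)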
Dec 12 17:25:28.884161 systemd[1]: Detected architecture arm64. Dec 12 17:25:28.884176 systemd[1]: Detected first boot. Dec 12 17:25:28.884188 systemd[1]: Hostname set to . Dec 12 17:25:28.884198 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Dec 12 17:25:28.884213 zram_generator::config[1237]: No configuration found. Dec 12 17:25:28.884233 kernel: NET: Registered PF_VSOCK protocol family Dec 12 17:25:28.884243 systemd[1]: Populated /etc with preset unit settings. Dec 12 17:25:28.884321 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 12 17:25:28.884338 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 12 17:25:28.884349 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 12 17:25:28.884360 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 12 17:25:28.884371 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 12 17:25:28.884384 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 12 17:25:28.884395 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 12 17:25:28.884406 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 12 17:25:28.884417 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 12 17:25:28.884430 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 12 17:25:28.884447 systemd[1]: Created slice user.slice - User and Session Slice. Dec 12 17:25:28.884461 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 12 17:25:28.884476 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 12 17:25:28.884487 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 12 17:25:28.884498 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 12 17:25:28.884509 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 12 17:25:28.884519 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 12 17:25:28.884530 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 12 17:25:28.884547 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 12 17:25:28.884563 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 12 17:25:28.884574 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 12 17:25:28.884585 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 12 17:25:28.884596 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 12 17:25:28.884607 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 12 17:25:28.884622 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 12 17:25:28.884634 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 12 17:25:28.884645 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Dec 12 17:25:28.884658 systemd[1]: Reached target slices.target - Slice Units. Dec 12 17:25:28.884669 systemd[1]: Reached target swap.target - Swaps. 
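"Initializing machine ID from SMBIOS/DMI UUID" above means the first-boot machine ID is derived from the VM's firmware-provided UUID rather than generated randomly. The sketch below shows one plausible derivation; reading /sys/class/dmi/id/product_uuid and lowercasing it with the dashes stripped is an assumption for illustration, not a statement of systemd's exact code path.

    #!/usr/bin/env python3
    # Hedged illustration of deriving a machine-id-style string from the
    # firmware UUID. The sysfs path and the dash-stripping/lowercasing are
    # assumptions, not systemd's exact behaviour.
    import os

    DMI_UUID = "/sys/class/dmi/id/product_uuid"

    def machine_id_from_dmi() -> str:
        with open(DMI_UUID) as f:
            return f.read().strip().replace("-", "").lower()

    if __name__ == "__main__":
        if os.path.exists(DMI_UUID):
            print(machine_id_from_dmi())
        else:
            print("no DMI product UUID exposed on this host")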
Dec 12 17:25:28.884680 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 12 17:25:28.884693 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 12 17:25:28.884705 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 12 17:25:28.884716 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Dec 12 17:25:28.884727 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Dec 12 17:25:28.884738 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 12 17:25:28.884749 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Dec 12 17:25:28.884760 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Dec 12 17:25:28.884771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 12 17:25:28.884783 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 12 17:25:28.884794 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 12 17:25:28.884807 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 12 17:25:28.884822 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 12 17:25:28.884833 systemd[1]: Mounting media.mount - External Media Directory... Dec 12 17:25:28.884849 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 12 17:25:28.884861 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 12 17:25:28.884874 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 12 17:25:28.884885 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 12 17:25:28.884896 systemd[1]: Reached target machines.target - Containers. Dec 12 17:25:28.884907 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 12 17:25:28.884919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:25:28.884930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 12 17:25:28.884942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 12 17:25:28.884954 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:25:28.884965 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:25:28.884976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:25:28.884987 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 12 17:25:28.885000 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:25:28.885011 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 12 17:25:28.885022 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 12 17:25:28.885035 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 12 17:25:28.885046 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 12 17:25:28.885057 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 12 17:25:28.885070 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:25:28.885081 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 12 17:25:28.885092 kernel: fuse: init (API version 7.41) Dec 12 17:25:28.885103 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 12 17:25:28.885138 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 12 17:25:28.885151 kernel: ACPI: bus type drm_connector registered Dec 12 17:25:28.885161 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 12 17:25:28.885177 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 12 17:25:28.885188 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 12 17:25:28.885200 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 12 17:25:28.885211 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 12 17:25:28.885222 systemd[1]: Mounted media.mount - External Media Directory. Dec 12 17:25:28.885232 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 12 17:25:28.885243 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 12 17:25:28.885256 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 12 17:25:28.885295 systemd-journald[1309]: Collecting audit messages is enabled. Dec 12 17:25:28.885321 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 12 17:25:28.885333 systemd-journald[1309]: Journal started Dec 12 17:25:28.885356 systemd-journald[1309]: Runtime Journal (/run/log/journal/e891f032f9394d78972bcab34dee018a) is 8M, max 319.5M, 311.5M free. Dec 12 17:25:28.733000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 12 17:25:28.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:28.827000 audit: BPF prog-id=14 op=UNLOAD Dec 12 17:25:28.827000 audit: BPF prog-id=13 op=UNLOAD Dec 12 17:25:28.828000 audit: BPF prog-id=15 op=LOAD Dec 12 17:25:28.828000 audit: BPF prog-id=16 op=LOAD Dec 12 17:25:28.828000 audit: BPF prog-id=17 op=LOAD Dec 12 17:25:28.881000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 12 17:25:28.881000 audit[1309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffca9dbc90 a2=4000 a3=0 items=0 ppid=1 pid=1309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:28.881000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 12 17:25:28.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.654673 systemd[1]: Queued start job for default target multi-user.target. Dec 12 17:25:28.663068 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 12 17:25:28.663475 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 12 17:25:28.887832 systemd[1]: Started systemd-journald.service - Journal Service. Dec 12 17:25:28.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.890187 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 12 17:25:28.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.891555 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 12 17:25:28.891733 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 12 17:25:28.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.893195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:25:28.893362 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:25:28.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.894709 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Dec 12 17:25:28.894864 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:25:28.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.895000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.896180 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:25:28.896353 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:25:28.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.897788 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 12 17:25:28.897947 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 12 17:25:28.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.899264 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:25:28.899455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 12 17:25:28.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.900890 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 12 17:25:28.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.902348 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 12 17:25:28.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.904380 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 12 17:25:28.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.905929 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 12 17:25:28.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.918990 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 12 17:25:28.921073 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Dec 12 17:25:28.923321 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 12 17:25:28.925373 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 12 17:25:28.926353 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 12 17:25:28.926393 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 12 17:25:28.928094 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 12 17:25:28.929441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:25:28.929551 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Dec 12 17:25:28.937848 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 12 17:25:28.940106 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 12 17:25:28.941149 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:25:28.944261 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 12 17:25:28.945424 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:25:28.946435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 12 17:25:28.949433 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 12 17:25:28.955271 systemd-journald[1309]: Time spent on flushing to /var/log/journal/e891f032f9394d78972bcab34dee018a is 28.476ms for 1814 entries. Dec 12 17:25:28.955271 systemd-journald[1309]: System Journal (/var/log/journal/e891f032f9394d78972bcab34dee018a) is 8M, max 588.1M, 580.1M free. Dec 12 17:25:28.992691 systemd-journald[1309]: Received client request to flush runtime journal. Dec 12 17:25:28.992729 kernel: loop1: detected capacity change from 0 to 109872 Dec 12 17:25:28.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:28.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:28.951945 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 12 17:25:28.955461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 12 17:25:28.957921 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 12 17:25:28.968243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 12 17:25:28.969767 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 12 17:25:28.977865 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 12 17:25:28.982328 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 12 17:25:28.984254 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 12 17:25:28.995540 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 12 17:25:28.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.011355 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 12 17:25:29.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.016134 kernel: loop2: detected capacity change from 0 to 100192 Dec 12 17:25:29.018148 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 12 17:25:29.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.020000 audit: BPF prog-id=18 op=LOAD Dec 12 17:25:29.020000 audit: BPF prog-id=19 op=LOAD Dec 12 17:25:29.020000 audit: BPF prog-id=20 op=LOAD Dec 12 17:25:29.022037 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Dec 12 17:25:29.024000 audit: BPF prog-id=21 op=LOAD Dec 12 17:25:29.025330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 12 17:25:29.029262 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 12 17:25:29.038000 audit: BPF prog-id=22 op=LOAD Dec 12 17:25:29.038000 audit: BPF prog-id=23 op=LOAD Dec 12 17:25:29.038000 audit: BPF prog-id=24 op=LOAD Dec 12 17:25:29.039530 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Dec 12 17:25:29.041000 audit: BPF prog-id=25 op=LOAD Dec 12 17:25:29.041000 audit: BPF prog-id=26 op=LOAD Dec 12 17:25:29.041000 audit: BPF prog-id=27 op=LOAD Dec 12 17:25:29.043944 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 12 17:25:29.063816 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Dec 12 17:25:29.063837 systemd-tmpfiles[1375]: ACLs are not supported, ignoring. Dec 12 17:25:29.069283 systemd-nsresourced[1376]: Not setting up BPF subsystem, as functionality has been disabled at compile time. 
Dec 12 17:25:29.070245 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 12 17:25:29.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.073322 kernel: loop3: detected capacity change from 0 to 1648 Dec 12 17:25:29.073155 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Dec 12 17:25:29.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.090589 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 12 17:25:29.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.097142 kernel: loop4: detected capacity change from 0 to 200800 Dec 12 17:25:29.134628 systemd-oomd[1373]: No swap; memory pressure usage will be degraded Dec 12 17:25:29.135419 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Dec 12 17:25:29.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.142156 kernel: loop5: detected capacity change from 0 to 109872 Dec 12 17:25:29.153602 systemd-resolved[1374]: Positive Trust Anchors: Dec 12 17:25:29.154269 kernel: loop6: detected capacity change from 0 to 100192 Dec 12 17:25:29.153881 systemd-resolved[1374]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 12 17:25:29.153888 systemd-resolved[1374]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Dec 12 17:25:29.153919 systemd-resolved[1374]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 12 17:25:29.163400 systemd-resolved[1374]: Using system hostname 'ci-4515-1-0-e-d121438740'. Dec 12 17:25:29.164950 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 12 17:25:29.166134 kernel: loop7: detected capacity change from 0 to 1648 Dec 12 17:25:29.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.166835 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 12 17:25:29.174137 kernel: loop1: detected capacity change from 0 to 200800 Dec 12 17:25:29.189664 (sd-merge)[1397]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. 
Dec 12 17:25:29.192760 (sd-merge)[1397]: Merged extensions into '/usr'. Dec 12 17:25:29.196698 systemd[1]: Reload requested from client PID 1357 ('systemd-sysext') (unit systemd-sysext.service)... Dec 12 17:25:29.196723 systemd[1]: Reloading... Dec 12 17:25:29.250282 zram_generator::config[1427]: No configuration found. Dec 12 17:25:29.400930 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 12 17:25:29.401213 systemd[1]: Reloading finished in 204 ms. Dec 12 17:25:29.418253 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 12 17:25:29.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.419864 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 12 17:25:29.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.434752 systemd[1]: Starting ensure-sysext.service... Dec 12 17:25:29.436817 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 12 17:25:29.438000 audit: BPF prog-id=8 op=UNLOAD Dec 12 17:25:29.438000 audit: BPF prog-id=7 op=UNLOAD Dec 12 17:25:29.438000 audit: BPF prog-id=28 op=LOAD Dec 12 17:25:29.438000 audit: BPF prog-id=29 op=LOAD Dec 12 17:25:29.439536 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 12 17:25:29.441000 audit: BPF prog-id=30 op=LOAD Dec 12 17:25:29.441000 audit: BPF prog-id=18 op=UNLOAD Dec 12 17:25:29.441000 audit: BPF prog-id=31 op=LOAD Dec 12 17:25:29.441000 audit: BPF prog-id=32 op=LOAD Dec 12 17:25:29.441000 audit: BPF prog-id=19 op=UNLOAD Dec 12 17:25:29.441000 audit: BPF prog-id=20 op=UNLOAD Dec 12 17:25:29.442000 audit: BPF prog-id=33 op=LOAD Dec 12 17:25:29.442000 audit: BPF prog-id=21 op=UNLOAD Dec 12 17:25:29.442000 audit: BPF prog-id=34 op=LOAD Dec 12 17:25:29.442000 audit: BPF prog-id=25 op=UNLOAD Dec 12 17:25:29.442000 audit: BPF prog-id=35 op=LOAD Dec 12 17:25:29.442000 audit: BPF prog-id=36 op=LOAD Dec 12 17:25:29.442000 audit: BPF prog-id=26 op=UNLOAD Dec 12 17:25:29.442000 audit: BPF prog-id=27 op=UNLOAD Dec 12 17:25:29.443000 audit: BPF prog-id=37 op=LOAD Dec 12 17:25:29.443000 audit: BPF prog-id=15 op=UNLOAD Dec 12 17:25:29.443000 audit: BPF prog-id=38 op=LOAD Dec 12 17:25:29.443000 audit: BPF prog-id=39 op=LOAD Dec 12 17:25:29.443000 audit: BPF prog-id=16 op=UNLOAD Dec 12 17:25:29.443000 audit: BPF prog-id=17 op=UNLOAD Dec 12 17:25:29.444000 audit: BPF prog-id=40 op=LOAD Dec 12 17:25:29.444000 audit: BPF prog-id=22 op=UNLOAD Dec 12 17:25:29.444000 audit: BPF prog-id=41 op=LOAD Dec 12 17:25:29.444000 audit: BPF prog-id=42 op=LOAD Dec 12 17:25:29.444000 audit: BPF prog-id=23 op=UNLOAD Dec 12 17:25:29.444000 audit: BPF prog-id=24 op=UNLOAD Dec 12 17:25:29.453725 systemd[1]: Reload requested from client PID 1464 ('systemctl') (unit ensure-sysext.service)... Dec 12 17:25:29.453745 systemd[1]: Reloading... Dec 12 17:25:29.460399 systemd-tmpfiles[1465]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 12 17:25:29.460434 systemd-tmpfiles[1465]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Dec 12 17:25:29.460715 systemd-tmpfiles[1465]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 12 17:25:29.461706 systemd-tmpfiles[1465]: ACLs are not supported, ignoring. Dec 12 17:25:29.461779 systemd-tmpfiles[1465]: ACLs are not supported, ignoring. Dec 12 17:25:29.468710 systemd-udevd[1466]: Using default interface naming scheme 'v257'. Dec 12 17:25:29.472363 systemd-tmpfiles[1465]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:25:29.472373 systemd-tmpfiles[1465]: Skipping /boot Dec 12 17:25:29.479184 systemd-tmpfiles[1465]: Detected autofs mount point /boot during canonicalization of boot. Dec 12 17:25:29.479198 systemd-tmpfiles[1465]: Skipping /boot Dec 12 17:25:29.510154 zram_generator::config[1502]: No configuration found. Dec 12 17:25:29.622147 kernel: mousedev: PS/2 mouse device common for all mice Dec 12 17:25:29.681139 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Dec 12 17:25:29.681230 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Dec 12 17:25:29.681247 kernel: [drm] features: -context_init Dec 12 17:25:29.691144 kernel: [drm] number of scanouts: 1 Dec 12 17:25:29.691231 kernel: [drm] number of cap sets: 0 Dec 12 17:25:29.694144 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Dec 12 17:25:29.706130 kernel: Console: switching to colour frame buffer device 160x50 Dec 12 17:25:29.706213 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Dec 12 17:25:29.715072 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 12 17:25:29.715465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 12 17:25:29.723169 systemd[1]: Reloading finished in 269 ms. Dec 12 17:25:29.736997 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 12 17:25:29.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:29.739000 audit: BPF prog-id=43 op=LOAD Dec 12 17:25:29.739000 audit: BPF prog-id=44 op=LOAD Dec 12 17:25:29.739000 audit: BPF prog-id=28 op=UNLOAD Dec 12 17:25:29.739000 audit: BPF prog-id=29 op=UNLOAD Dec 12 17:25:29.740000 audit: BPF prog-id=45 op=LOAD Dec 12 17:25:29.740000 audit: BPF prog-id=40 op=UNLOAD Dec 12 17:25:29.740000 audit: BPF prog-id=46 op=LOAD Dec 12 17:25:29.740000 audit: BPF prog-id=47 op=LOAD Dec 12 17:25:29.740000 audit: BPF prog-id=41 op=UNLOAD Dec 12 17:25:29.740000 audit: BPF prog-id=42 op=UNLOAD Dec 12 17:25:29.741000 audit: BPF prog-id=48 op=LOAD Dec 12 17:25:29.741000 audit: BPF prog-id=34 op=UNLOAD Dec 12 17:25:29.741000 audit: BPF prog-id=49 op=LOAD Dec 12 17:25:29.741000 audit: BPF prog-id=50 op=LOAD Dec 12 17:25:29.741000 audit: BPF prog-id=35 op=UNLOAD Dec 12 17:25:29.741000 audit: BPF prog-id=36 op=UNLOAD Dec 12 17:25:29.741000 audit: BPF prog-id=51 op=LOAD Dec 12 17:25:29.741000 audit: BPF prog-id=30 op=UNLOAD Dec 12 17:25:29.742000 audit: BPF prog-id=52 op=LOAD Dec 12 17:25:29.742000 audit: BPF prog-id=53 op=LOAD Dec 12 17:25:29.742000 audit: BPF prog-id=31 op=UNLOAD Dec 12 17:25:29.742000 audit: BPF prog-id=32 op=UNLOAD Dec 12 17:25:29.742000 audit: BPF prog-id=54 op=LOAD Dec 12 17:25:29.742000 audit: BPF prog-id=33 op=UNLOAD Dec 12 17:25:29.743000 audit: BPF prog-id=55 op=LOAD Dec 12 17:25:29.743000 audit: BPF prog-id=37 op=UNLOAD Dec 12 17:25:29.743000 audit: BPF prog-id=56 op=LOAD Dec 12 17:25:29.743000 audit: BPF prog-id=57 op=LOAD Dec 12 17:25:29.743000 audit: BPF prog-id=38 op=UNLOAD Dec 12 17:25:29.743000 audit: BPF prog-id=39 op=UNLOAD Dec 12 17:25:29.755109 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 12 17:25:29.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.782202 systemd[1]: Finished ensure-sysext.service. Dec 12 17:25:29.798899 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:25:29.801649 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 12 17:25:29.802839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 12 17:25:29.824068 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 12 17:25:29.826352 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 12 17:25:29.828492 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 12 17:25:29.831343 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 12 17:25:29.834807 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Dec 12 17:25:29.837428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 12 17:25:29.837549 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Dec 12 17:25:29.839056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 12 17:25:29.841578 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 12 17:25:29.842813 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 12 17:25:29.844238 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 12 17:25:29.847000 audit: BPF prog-id=58 op=LOAD Dec 12 17:25:29.848381 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 12 17:25:29.849418 systemd[1]: Reached target time-set.target - System Time Set. Dec 12 17:25:29.851458 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 12 17:25:29.855880 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 12 17:25:29.858302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 12 17:25:29.858431 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 12 17:25:29.858469 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 12 17:25:29.858874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 12 17:25:29.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.865186 kernel: PTP clock support registered Dec 12 17:25:29.864877 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 12 17:25:29.865252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 12 17:25:29.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.866455 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 12 17:25:29.869291 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 12 17:25:29.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.870954 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 12 17:25:29.871144 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Dec 12 17:25:29.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.872732 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Dec 12 17:25:29.872919 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Dec 12 17:25:29.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.875605 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 12 17:25:29.876000 audit[1605]: SYSTEM_BOOT pid=1605 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.886068 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 12 17:25:29.886223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 12 17:25:29.890328 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 12 17:25:29.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:29.894176 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 12 17:25:29.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:29.916000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 12 17:25:29.916000 audit[1631]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe25ec1d0 a2=420 a3=0 items=0 ppid=1586 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:29.916000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:25:29.917618 augenrules[1631]: No rules Dec 12 17:25:29.918987 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:25:29.919350 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:25:29.935903 systemd-networkd[1604]: lo: Link UP Dec 12 17:25:29.935918 systemd-networkd[1604]: lo: Gained carrier Dec 12 17:25:29.937047 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 12 17:25:29.938857 systemd-networkd[1604]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:29.938867 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 12 17:25:29.939407 systemd[1]: Reached target network.target - Network. Dec 12 17:25:29.941830 systemd-networkd[1604]: eth0: Link UP Dec 12 17:25:29.942032 systemd-networkd[1604]: eth0: Gained carrier Dec 12 17:25:29.942055 systemd-networkd[1604]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Dec 12 17:25:29.942396 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 12 17:25:29.946978 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 12 17:25:29.958379 systemd-networkd[1604]: eth0: DHCPv4 address 10.0.11.71/25, gateway 10.0.11.1 acquired from 10.0.11.1 Dec 12 17:25:29.968025 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 12 17:25:29.980711 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 12 17:25:29.989346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 12 17:25:29.990835 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 12 17:25:30.439901 ldconfig[1599]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 12 17:25:30.444131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 12 17:25:30.446976 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 12 17:25:30.478537 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 12 17:25:30.479860 systemd[1]: Reached target sysinit.target - System Initialization. Dec 12 17:25:30.482335 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 12 17:25:30.483444 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 12 17:25:30.484752 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 12 17:25:30.485789 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 12 17:25:30.486946 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Dec 12 17:25:30.488195 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Dec 12 17:25:30.489311 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 12 17:25:30.490386 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 12 17:25:30.490426 systemd[1]: Reached target paths.target - Path Units. Dec 12 17:25:30.491204 systemd[1]: Reached target timers.target - Timer Units. Dec 12 17:25:30.493593 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 12 17:25:30.495818 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 12 17:25:30.498554 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 12 17:25:30.499827 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 12 17:25:30.500984 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 12 17:25:30.507265 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 12 17:25:30.508460 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 12 17:25:30.510046 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 12 17:25:30.511152 systemd[1]: Reached target sockets.target - Socket Units. Dec 12 17:25:30.511972 systemd[1]: Reached target basic.target - Basic System. Dec 12 17:25:30.512879 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:25:30.512911 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 12 17:25:30.515758 systemd[1]: Starting chronyd.service - NTP client/server... Dec 12 17:25:30.517407 systemd[1]: Starting containerd.service - containerd container runtime... Dec 12 17:25:30.519295 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 12 17:25:30.521276 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 12 17:25:30.522853 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 12 17:25:30.525482 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 12 17:25:30.526137 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:30.527815 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 12 17:25:30.528777 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 12 17:25:30.529957 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 12 17:25:30.540650 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 12 17:25:30.543236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 12 17:25:30.547211 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Dec 12 17:25:30.548804 jq[1657]: false Dec 12 17:25:30.550531 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 12 17:25:30.551501 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 12 17:25:30.551941 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 12 17:25:30.552523 systemd[1]: Starting update-engine.service - Update Engine... Dec 12 17:25:30.554372 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 12 17:25:30.561260 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 12 17:25:30.562814 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 12 17:25:30.563032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 12 17:25:30.565930 jq[1669]: true Dec 12 17:25:30.566640 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 12 17:25:30.566870 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 12 17:25:30.568202 chronyd[1652]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Dec 12 17:25:30.571790 extend-filesystems[1660]: Found /dev/vda6 Dec 12 17:25:30.572423 chronyd[1652]: Loaded seccomp filter (level 2) Dec 12 17:25:30.577093 extend-filesystems[1660]: Found /dev/vda9 Dec 12 17:25:30.580155 extend-filesystems[1660]: Checking size of /dev/vda9 Dec 12 17:25:30.579192 systemd[1]: Started chronyd.service - NTP client/server. Dec 12 17:25:30.580543 systemd[1]: motdgen.service: Deactivated successfully. Dec 12 17:25:30.580742 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 12 17:25:30.596387 jq[1684]: true Dec 12 17:25:30.597665 tar[1677]: linux-arm64/LICENSE Dec 12 17:25:30.597665 tar[1677]: linux-arm64/helm Dec 12 17:25:30.601761 update_engine[1667]: I20251212 17:25:30.601519 1667 main.cc:92] Flatcar Update Engine starting Dec 12 17:25:30.605124 extend-filesystems[1660]: Resized partition /dev/vda9 Dec 12 17:25:30.613144 extend-filesystems[1707]: resize2fs 1.47.3 (8-Jul-2025) Dec 12 17:25:30.627137 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Dec 12 17:25:30.644600 dbus-daemon[1655]: [system] SELinux support is enabled Dec 12 17:25:30.644833 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 12 17:25:30.647282 systemd-logind[1666]: New seat seat0. Dec 12 17:25:30.647977 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 12 17:25:30.648004 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 12 17:25:30.652655 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 12 17:25:30.652678 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 12 17:25:30.653662 update_engine[1667]: I20251212 17:25:30.652863 1667 update_check_scheduler.cc:74] Next update check in 4m1s Dec 12 17:25:30.653482 systemd-logind[1666]: Watching system buttons on /dev/input/event0 (Power Button) Dec 12 17:25:30.653498 systemd-logind[1666]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Dec 12 17:25:30.654588 systemd[1]: Started systemd-logind.service - User Login Management. Dec 12 17:25:30.657951 systemd[1]: Started update-engine.service - Update Engine. Dec 12 17:25:30.660757 dbus-daemon[1655]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 12 17:25:30.662265 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 12 17:25:30.730327 locksmithd[1724]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 12 17:25:30.765757 bash[1722]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:25:30.769183 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 12 17:25:30.774018 containerd[1692]: time="2025-12-12T17:25:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 12 17:25:30.774453 systemd[1]: Starting sshkeys.service... Dec 12 17:25:30.775841 containerd[1692]: time="2025-12-12T17:25:30.775808640Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Dec 12 17:25:30.794397 containerd[1692]: time="2025-12-12T17:25:30.794341880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.88µs" Dec 12 17:25:30.794483 containerd[1692]: time="2025-12-12T17:25:30.794405400Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 12 17:25:30.794483 containerd[1692]: time="2025-12-12T17:25:30.794463400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 12 17:25:30.794483 containerd[1692]: time="2025-12-12T17:25:30.794477280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 12 17:25:30.794726 containerd[1692]: time="2025-12-12T17:25:30.794692680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 12 17:25:30.794752 containerd[1692]: time="2025-12-12T17:25:30.794727560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:25:30.794835 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Dec 12 17:25:30.794916 containerd[1692]: time="2025-12-12T17:25:30.794862320Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 12 17:25:30.794916 containerd[1692]: time="2025-12-12T17:25:30.794882760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795267 containerd[1692]: time="2025-12-12T17:25:30.795240720Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795296 containerd[1692]: time="2025-12-12T17:25:30.795266000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795333 containerd[1692]: time="2025-12-12T17:25:30.795311160Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795365 containerd[1692]: time="2025-12-12T17:25:30.795330400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795550 containerd[1692]: time="2025-12-12T17:25:30.795520760Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795577 containerd[1692]: time="2025-12-12T17:25:30.795551080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795708 containerd[1692]: time="2025-12-12T17:25:30.795681360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.795932 containerd[1692]: time="2025-12-12T17:25:30.795907480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.796011 containerd[1692]: time="2025-12-12T17:25:30.795991000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 12 17:25:30.796038 containerd[1692]: time="2025-12-12T17:25:30.796009440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 12 17:25:30.796056 containerd[1692]: time="2025-12-12T17:25:30.796035960Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 12 17:25:30.796517 containerd[1692]: time="2025-12-12T17:25:30.796486640Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 12 17:25:30.796611 containerd[1692]: time="2025-12-12T17:25:30.796593520Z" level=info msg="metadata content store policy set" policy=shared Dec 12 17:25:30.798092 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 12 17:25:30.815128 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:30.833794 containerd[1692]: time="2025-12-12T17:25:30.833748760Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 12 17:25:30.833877 containerd[1692]: time="2025-12-12T17:25:30.833821360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:25:30.833935 containerd[1692]: time="2025-12-12T17:25:30.833912760Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Dec 12 17:25:30.833935 containerd[1692]: time="2025-12-12T17:25:30.833931200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 12 17:25:30.833988 containerd[1692]: time="2025-12-12T17:25:30.833945280Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 12 17:25:30.833988 containerd[1692]: time="2025-12-12T17:25:30.833966760Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 12 17:25:30.833988 containerd[1692]: time="2025-12-12T17:25:30.833981000Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 12 17:25:30.834038 containerd[1692]: time="2025-12-12T17:25:30.833991080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 12 17:25:30.834038 containerd[1692]: time="2025-12-12T17:25:30.834002280Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Dec 12 17:25:30.834038 containerd[1692]: time="2025-12-12T17:25:30.834014280Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 12 17:25:30.834038 containerd[1692]: time="2025-12-12T17:25:30.834030560Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 12 17:25:30.834103 containerd[1692]: time="2025-12-12T17:25:30.834041520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 12 17:25:30.834103 containerd[1692]: time="2025-12-12T17:25:30.834052720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 12 17:25:30.834103 containerd[1692]: time="2025-12-12T17:25:30.834088080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 12 17:25:30.834913 containerd[1692]: time="2025-12-12T17:25:30.834850080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 12 17:25:30.834939 containerd[1692]: time="2025-12-12T17:25:30.834928240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 12 17:25:30.835020 containerd[1692]: time="2025-12-12T17:25:30.834947320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 12 17:25:30.835051 containerd[1692]: time="2025-12-12T17:25:30.835023880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 12 17:25:30.835051 containerd[1692]: time="2025-12-12T17:25:30.835045040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events 
type=io.containerd.grpc.v1 Dec 12 17:25:30.835084 containerd[1692]: time="2025-12-12T17:25:30.835059720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 12 17:25:30.835102 containerd[1692]: time="2025-12-12T17:25:30.835089680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 12 17:25:30.835128 containerd[1692]: time="2025-12-12T17:25:30.835105320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 12 17:25:30.835147 containerd[1692]: time="2025-12-12T17:25:30.835130360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 12 17:25:30.835147 containerd[1692]: time="2025-12-12T17:25:30.835143800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 12 17:25:30.835186 containerd[1692]: time="2025-12-12T17:25:30.835154560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 12 17:25:30.835257 containerd[1692]: time="2025-12-12T17:25:30.835184520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 12 17:25:30.835316 containerd[1692]: time="2025-12-12T17:25:30.835297840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 12 17:25:30.835343 containerd[1692]: time="2025-12-12T17:25:30.835319920Z" level=info msg="Start snapshots syncer" Dec 12 17:25:30.835373 containerd[1692]: time="2025-12-12T17:25:30.835357360Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 12 17:25:30.835856 containerd[1692]: time="2025-12-12T17:25:30.835808840Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 12 17:25:30.835956 containerd[1692]: time="2025-12-12T17:25:30.835924720Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.835991760Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836243680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836315920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836339760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836351280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836418880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836435320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836446520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836457200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 12 
17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836532640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836597480Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836616720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 12 17:25:30.837117 containerd[1692]: time="2025-12-12T17:25:30.836625600Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836634680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836652520Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836673680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836685600Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836698360Z" level=info msg="runtime interface created" Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836703320Z" level=info msg="created NRI interface" Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836710720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836780520Z" level=info msg="Connect containerd service" Dec 12 17:25:30.837332 containerd[1692]: time="2025-12-12T17:25:30.836805200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 12 17:25:30.837987 containerd[1692]: time="2025-12-12T17:25:30.837958840Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:25:30.933143 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Dec 12 17:25:30.939485 containerd[1692]: time="2025-12-12T17:25:30.939433720Z" level=info msg="Start subscribing containerd event" Dec 12 17:25:30.939551 containerd[1692]: time="2025-12-12T17:25:30.939530840Z" level=info msg="Start recovering state" Dec 12 17:25:30.939696 containerd[1692]: time="2025-12-12T17:25:30.939676440Z" level=info msg="Start event monitor" Dec 12 17:25:30.939721 containerd[1692]: time="2025-12-12T17:25:30.939700240Z" level=info msg="Start cni network conf syncer for default" Dec 12 17:25:30.939721 containerd[1692]: time="2025-12-12T17:25:30.939710240Z" level=info msg="Start streaming server" Dec 12 17:25:30.939721 containerd[1692]: time="2025-12-12T17:25:30.939718600Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 12 17:25:30.952216 containerd[1692]: time="2025-12-12T17:25:30.939727160Z" level=info msg="runtime interface starting 
up..." Dec 12 17:25:30.952216 containerd[1692]: time="2025-12-12T17:25:30.939732680Z" level=info msg="starting plugins..." Dec 12 17:25:30.952216 containerd[1692]: time="2025-12-12T17:25:30.939753320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 12 17:25:30.952216 containerd[1692]: time="2025-12-12T17:25:30.940078960Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 12 17:25:30.952216 containerd[1692]: time="2025-12-12T17:25:30.940158800Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 12 17:25:30.952216 containerd[1692]: time="2025-12-12T17:25:30.940280680Z" level=info msg="containerd successfully booted in 0.166687s" Dec 12 17:25:30.940390 systemd[1]: Started containerd.service - containerd container runtime. Dec 12 17:25:30.955014 extend-filesystems[1707]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 12 17:25:30.955014 extend-filesystems[1707]: old_desc_blocks = 1, new_desc_blocks = 6 Dec 12 17:25:30.955014 extend-filesystems[1707]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Dec 12 17:25:30.960065 extend-filesystems[1660]: Resized filesystem in /dev/vda9 Dec 12 17:25:30.957400 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 12 17:25:30.957630 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 12 17:25:31.046736 tar[1677]: linux-arm64/README.md Dec 12 17:25:31.064569 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 12 17:25:31.543147 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:31.699378 systemd-networkd[1604]: eth0: Gained IPv6LL Dec 12 17:25:31.704541 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 12 17:25:31.708448 systemd[1]: Reached target network-online.target - Network is Online. Dec 12 17:25:31.711493 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:31.713830 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 12 17:25:31.742547 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 12 17:25:31.825146 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:32.024531 sshd_keygen[1688]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 12 17:25:32.043940 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 12 17:25:32.047455 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 12 17:25:32.073677 systemd[1]: issuegen.service: Deactivated successfully. Dec 12 17:25:32.073952 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 12 17:25:32.076745 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 12 17:25:32.101187 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 12 17:25:32.106472 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 12 17:25:32.108671 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 12 17:25:32.110073 systemd[1]: Reached target getty.target - Login Prompts. Dec 12 17:25:32.572324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
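At this point containerd has finished booting and is serving on the two sockets listed above; the earlier "failed to load cni during init" error is only the CRI plugin noting that /etc/cni/net.d is still empty at startup, which a CNI add-on normally populates later. A small sketch that probes the same sockets the daemon reports (sock_check.py is a hypothetical name; connecting typically requires root, and crictl/ctr are the real clients):

    # sock_check.py - quick reachability probe of the containerd sockets the
    # log reports as "serving...". Purely illustrative.
    import socket

    def reachable(path: str) -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    for sock_path in ("/run/containerd/containerd.sock",
                      "/run/containerd/containerd.sock.ttrpc"):
        print(sock_path, "up" if reachable(sock_path) else "down")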
Dec 12 17:25:32.576378 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:33.038403 kubelet[1798]: E1212 17:25:33.038344 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:33.040744 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:33.040876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:33.041243 systemd[1]: kubelet.service: Consumed 708ms CPU time, 248.9M memory peak. Dec 12 17:25:33.556194 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:33.836139 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:37.566152 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:37.571808 coreos-metadata[1654]: Dec 12 17:25:37.571 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:25:37.587641 coreos-metadata[1654]: Dec 12 17:25:37.587 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 12 17:25:37.846151 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 12 17:25:37.852138 coreos-metadata[1739]: Dec 12 17:25:37.852 WARN failed to locate config-drive, using the metadata service API instead Dec 12 17:25:37.864783 coreos-metadata[1739]: Dec 12 17:25:37.864 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 12 17:25:39.315080 coreos-metadata[1654]: Dec 12 17:25:39.314 INFO Fetch successful Dec 12 17:25:39.315743 coreos-metadata[1654]: Dec 12 17:25:39.315 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 12 17:25:39.477886 coreos-metadata[1739]: Dec 12 17:25:39.477 INFO Fetch successful Dec 12 17:25:39.477886 coreos-metadata[1739]: Dec 12 17:25:39.477 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 12 17:25:39.567938 coreos-metadata[1654]: Dec 12 17:25:39.567 INFO Fetch successful Dec 12 17:25:39.568176 coreos-metadata[1654]: Dec 12 17:25:39.568 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 12 17:25:39.681851 coreos-metadata[1739]: Dec 12 17:25:39.681 INFO Fetch successful Dec 12 17:25:39.965842 unknown[1739]: wrote ssh authorized keys file for user: core Dec 12 17:25:39.998083 update-ssh-keys[1817]: Updated "/home/core/.ssh/authorized_keys" Dec 12 17:25:39.999216 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 12 17:25:40.001368 systemd[1]: Finished sshkeys.service. 
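The kubelet exit above is expected at this stage: /var/lib/kubelet/config.yaml does not exist yet, and that file is normally written when the node is initialized or joined (for example by kubeadm), after which the unit's scheduled restarts succeed. Separately, both metadata agents fail to find a config-drive and fall back to the link-local metadata service; a sketch that fetches the same endpoints the agent logs (metadata_probe.py is a hypothetical name, stdlib urllib, and it only works from the instance itself):

    # metadata_probe.py - fetch the same meta-data endpoints the Flatcar
    # metadata agent (coreos-metadata) logs above. Illustrative only.
    from urllib.request import urlopen

    BASE = "http://169.254.169.254/latest/meta-data"
    for field in ("hostname", "instance-id", "instance-type",
                  "local-ipv4", "public-ipv4"):
        try:
            value = urlopen(f"{BASE}/{field}", timeout=2).read().decode().strip()
        except OSError as exc:
            value = f"<unavailable: {exc}>"
        print(f"{field}: {value}")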
Dec 12 17:25:40.278697 coreos-metadata[1654]: Dec 12 17:25:40.278 INFO Fetch successful Dec 12 17:25:40.278697 coreos-metadata[1654]: Dec 12 17:25:40.278 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 12 17:25:40.408338 coreos-metadata[1654]: Dec 12 17:25:40.408 INFO Fetch successful Dec 12 17:25:40.408338 coreos-metadata[1654]: Dec 12 17:25:40.408 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 12 17:25:40.542295 coreos-metadata[1654]: Dec 12 17:25:40.542 INFO Fetch successful Dec 12 17:25:40.542295 coreos-metadata[1654]: Dec 12 17:25:40.542 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 12 17:25:40.666319 coreos-metadata[1654]: Dec 12 17:25:40.666 INFO Fetch successful Dec 12 17:25:40.689607 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 12 17:25:40.690003 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 12 17:25:40.690158 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 12 17:25:40.690267 systemd[1]: Startup finished in 2.705s (kernel) + 13.937s (initrd) + 12.667s (userspace) = 29.310s. Dec 12 17:25:42.589306 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 12 17:25:42.590714 systemd[1]: Started sshd@0-10.0.11.71:22-139.178.89.65:50782.service - OpenSSH per-connection server daemon (139.178.89.65:50782). Dec 12 17:25:43.233323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 12 17:25:43.234663 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:43.355780 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:43.359450 (kubelet)[1837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:43.390791 kubelet[1837]: E1212 17:25:43.390717 1837 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:43.393577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:43.393711 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:25:43.395202 systemd[1]: kubelet.service: Consumed 141ms CPU time, 107.5M memory peak. Dec 12 17:25:43.445923 sshd[1826]: Accepted publickey for core from 139.178.89.65 port 50782 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:43.448388 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:43.454420 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 12 17:25:43.455292 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 12 17:25:43.459804 systemd-logind[1666]: New session 1 of user core. Dec 12 17:25:43.481040 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 12 17:25:43.484999 systemd[1]: Starting user@500.service - User Manager for UID 500... 
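The "Startup finished" total can be re-derived from the per-phase figures; the sum of the printed phases is 29.309s, about a millisecond off the printed 29.310s only because each phase is rounded before display. A tiny check (startup_sum.py is a hypothetical name):

    # startup_sum.py - re-add the phase timings from systemd's
    # "Startup finished" line in this log.
    import re

    line = ("Startup finished in 2.705s (kernel) + 13.937s (initrd) "
            "+ 12.667s (userspace) = 29.310s")
    phases = {name: float(secs)
              for secs, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    print(phases, "sum =", round(sum(phases.values()), 3))
    # {'kernel': 2.705, 'initrd': 13.937, 'userspace': 12.667} sum = 29.309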
Dec 12 17:25:43.506024 (systemd)[1847]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 12 17:25:43.508505 systemd-logind[1666]: New session c1 of user core. Dec 12 17:25:43.635749 systemd[1847]: Queued start job for default target default.target. Dec 12 17:25:43.659264 systemd[1847]: Created slice app.slice - User Application Slice. Dec 12 17:25:43.659295 systemd[1847]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Dec 12 17:25:43.659308 systemd[1847]: Reached target paths.target - Paths. Dec 12 17:25:43.659360 systemd[1847]: Reached target timers.target - Timers. Dec 12 17:25:43.660574 systemd[1847]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 12 17:25:43.661285 systemd[1847]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Dec 12 17:25:43.670106 systemd[1847]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 12 17:25:43.670917 systemd[1847]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Dec 12 17:25:43.671007 systemd[1847]: Reached target sockets.target - Sockets. Dec 12 17:25:43.671049 systemd[1847]: Reached target basic.target - Basic System. Dec 12 17:25:43.671077 systemd[1847]: Reached target default.target - Main User Target. Dec 12 17:25:43.671102 systemd[1847]: Startup finished in 156ms. Dec 12 17:25:43.671346 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 12 17:25:43.672663 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 12 17:25:44.159462 systemd[1]: Started sshd@1-10.0.11.71:22-139.178.89.65:50784.service - OpenSSH per-connection server daemon (139.178.89.65:50784). Dec 12 17:25:44.979753 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 50784 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:44.980961 sshd-session[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:44.984786 systemd-logind[1666]: New session 2 of user core. Dec 12 17:25:44.999438 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 12 17:25:45.455537 sshd[1863]: Connection closed by 139.178.89.65 port 50784 Dec 12 17:25:45.456234 sshd-session[1860]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:45.459799 systemd[1]: sshd@1-10.0.11.71:22-139.178.89.65:50784.service: Deactivated successfully. Dec 12 17:25:45.461326 systemd[1]: session-2.scope: Deactivated successfully. Dec 12 17:25:45.462814 systemd-logind[1666]: Session 2 logged out. Waiting for processes to exit. Dec 12 17:25:45.463909 systemd-logind[1666]: Removed session 2. Dec 12 17:25:45.633693 systemd[1]: Started sshd@2-10.0.11.71:22-139.178.89.65:50792.service - OpenSSH per-connection server daemon (139.178.89.65:50792). Dec 12 17:25:46.463755 sshd[1869]: Accepted publickey for core from 139.178.89.65 port 50792 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:46.464954 sshd-session[1869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:46.469846 systemd-logind[1666]: New session 3 of user core. Dec 12 17:25:46.481456 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 12 17:25:46.942002 sshd[1872]: Connection closed by 139.178.89.65 port 50792 Dec 12 17:25:46.942479 sshd-session[1869]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:46.945923 systemd[1]: sshd@2-10.0.11.71:22-139.178.89.65:50792.service: Deactivated successfully. 
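Each SSH connection above runs as its own unit named sshd@<sequence>-<local ip>:<port>-<peer ip>:<port>.service, so the unit name alone identifies the peer. A short parse of one name taken from this log (sshd_unit_parse.py is a hypothetical name; the field order is read off the names as they appear here):

    # sshd_unit_parse.py - split a per-connection sshd unit name from this
    # log into its sequence number and endpoints. Illustrative only.
    import re

    unit = "sshd@1-10.0.11.71:22-139.178.89.65:50784.service"
    m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
    seq, local_ip, local_port, peer_ip, peer_port = m.groups()
    print(f"connection #{seq}: {peer_ip}:{peer_port} -> {local_ip}:{local_port}")
    # connection #1: 139.178.89.65:50784 -> 10.0.11.71:22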
Dec 12 17:25:46.947392 systemd[1]: session-3.scope: Deactivated successfully. Dec 12 17:25:46.948673 systemd-logind[1666]: Session 3 logged out. Waiting for processes to exit. Dec 12 17:25:46.949931 systemd-logind[1666]: Removed session 3. Dec 12 17:25:47.108416 systemd[1]: Started sshd@3-10.0.11.71:22-139.178.89.65:50798.service - OpenSSH per-connection server daemon (139.178.89.65:50798). Dec 12 17:25:47.949398 sshd[1878]: Accepted publickey for core from 139.178.89.65 port 50798 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:47.950612 sshd-session[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:47.954466 systemd-logind[1666]: New session 4 of user core. Dec 12 17:25:47.966403 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 12 17:25:48.425084 sshd[1881]: Connection closed by 139.178.89.65 port 50798 Dec 12 17:25:48.424933 sshd-session[1878]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:48.428842 systemd[1]: sshd@3-10.0.11.71:22-139.178.89.65:50798.service: Deactivated successfully. Dec 12 17:25:48.430397 systemd[1]: session-4.scope: Deactivated successfully. Dec 12 17:25:48.431132 systemd-logind[1666]: Session 4 logged out. Waiting for processes to exit. Dec 12 17:25:48.432061 systemd-logind[1666]: Removed session 4. Dec 12 17:25:48.597493 systemd[1]: Started sshd@4-10.0.11.71:22-139.178.89.65:50808.service - OpenSSH per-connection server daemon (139.178.89.65:50808). Dec 12 17:25:49.445564 sshd[1887]: Accepted publickey for core from 139.178.89.65 port 50808 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:49.446753 sshd-session[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:49.450723 systemd-logind[1666]: New session 5 of user core. Dec 12 17:25:49.457300 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 12 17:25:49.790784 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 12 17:25:49.791376 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:49.802351 sudo[1891]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:49.962314 sshd[1890]: Connection closed by 139.178.89.65 port 50808 Dec 12 17:25:49.962783 sshd-session[1887]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:49.966998 systemd[1]: sshd@4-10.0.11.71:22-139.178.89.65:50808.service: Deactivated successfully. Dec 12 17:25:49.968702 systemd[1]: session-5.scope: Deactivated successfully. Dec 12 17:25:49.969491 systemd-logind[1666]: Session 5 logged out. Waiting for processes to exit. Dec 12 17:25:49.970712 systemd-logind[1666]: Removed session 5. Dec 12 17:25:50.124364 systemd[1]: Started sshd@5-10.0.11.71:22-139.178.89.65:33448.service - OpenSSH per-connection server daemon (139.178.89.65:33448). Dec 12 17:25:50.958038 sshd[1897]: Accepted publickey for core from 139.178.89.65 port 33448 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:50.959448 sshd-session[1897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:50.963196 systemd-logind[1666]: New session 6 of user core. Dec 12 17:25:50.970475 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 12 17:25:51.277643 sudo[1902]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 12 17:25:51.277893 sudo[1902]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:51.282849 sudo[1902]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:51.288499 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 12 17:25:51.288751 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:51.297668 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 12 17:25:51.330000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:25:51.330696 augenrules[1924]: No rules Dec 12 17:25:51.331924 systemd[1]: audit-rules.service: Deactivated successfully. Dec 12 17:25:51.332250 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 12 17:25:51.333565 kernel: kauditd_printk_skb: 186 callbacks suppressed Dec 12 17:25:51.333630 kernel: audit: type=1305 audit(1765560351.330:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 12 17:25:51.333650 kernel: audit: type=1300 audit(1765560351.330:230): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe1d27c70 a2=420 a3=0 items=0 ppid=1905 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.330000 audit[1924]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe1d27c70 a2=420 a3=0 items=0 ppid=1905 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:51.330000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:25:51.336196 sudo[1901]: pam_unix(sudo:session): session closed for user root Dec 12 17:25:51.337439 kernel: audit: type=1327 audit(1765560351.330:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 12 17:25:51.337475 kernel: audit: type=1130 audit(1765560351.332:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.339732 kernel: audit: type=1131 audit(1765560351.332:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:51.336000 audit[1901]: USER_END pid=1901 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.344843 kernel: audit: type=1106 audit(1765560351.336:233): pid=1901 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.344934 kernel: audit: type=1104 audit(1765560351.336:234): pid=1901 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.336000 audit[1901]: CRED_DISP pid=1901 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.493241 sshd[1900]: Connection closed by 139.178.89.65 port 33448 Dec 12 17:25:51.493783 sshd-session[1897]: pam_unix(sshd:session): session closed for user core Dec 12 17:25:51.494000 audit[1897]: USER_END pid=1897 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:51.497593 systemd[1]: sshd@5-10.0.11.71:22-139.178.89.65:33448.service: Deactivated successfully. Dec 12 17:25:51.494000 audit[1897]: CRED_DISP pid=1897 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:51.499080 systemd[1]: session-6.scope: Deactivated successfully. Dec 12 17:25:51.499888 systemd-logind[1666]: Session 6 logged out. Waiting for processes to exit. Dec 12 17:25:51.500766 systemd-logind[1666]: Removed session 6. Dec 12 17:25:51.501639 kernel: audit: type=1106 audit(1765560351.494:235): pid=1897 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:51.501679 kernel: audit: type=1104 audit(1765560351.494:236): pid=1897 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:51.501705 kernel: audit: type=1131 audit(1765560351.496:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.11.71:22-139.178.89.65:33448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.11.71:22-139.178.89.65:33448 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:51.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.11.71:22-139.178.89.65:33454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:51.665141 systemd[1]: Started sshd@6-10.0.11.71:22-139.178.89.65:33454.service - OpenSSH per-connection server daemon (139.178.89.65:33454). Dec 12 17:25:52.481000 audit[1933]: USER_ACCT pid=1933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:52.481919 sshd[1933]: Accepted publickey for core from 139.178.89.65 port 33454 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:25:52.482000 audit[1933]: CRED_ACQ pid=1933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:52.482000 audit[1933]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc12e030 a2=3 a3=0 items=0 ppid=1 pid=1933 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:52.482000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:25:52.483098 sshd-session[1933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:25:52.487181 systemd-logind[1666]: New session 7 of user core. Dec 12 17:25:52.495518 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 12 17:25:52.497000 audit[1933]: USER_START pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:52.498000 audit[1936]: CRED_ACQ pid=1936 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:25:52.801000 audit[1937]: USER_ACCT pid=1937 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:52.801855 sudo[1937]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 12 17:25:52.801000 audit[1937]: CRED_REFR pid=1937 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:25:52.802107 sudo[1937]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 12 17:25:52.803000 audit[1937]: USER_START pid=1937 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 12 17:25:53.152529 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 12 17:25:53.172404 (dockerd)[1959]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 12 17:25:53.417430 dockerd[1959]: time="2025-12-12T17:25:53.417302600Z" level=info msg="Starting up" Dec 12 17:25:53.418902 dockerd[1959]: time="2025-12-12T17:25:53.418875800Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 12 17:25:53.422796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 12 17:25:53.424124 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:25:53.429380 dockerd[1959]: time="2025-12-12T17:25:53.429342520Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 12 17:25:53.507410 dockerd[1959]: time="2025-12-12T17:25:53.507041960Z" level=info msg="Loading containers: start." Dec 12 17:25:53.522146 kernel: Initializing XFRM netlink socket Dec 12 17:25:53.569549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:25:53.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:53.573668 (kubelet)[2004]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:25:53.584000 audit[2026]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.584000 audit[2026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc7fefae0 a2=0 a3=0 items=0 ppid=1959 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.584000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:25:53.588000 audit[2029]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.588000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffee989430 a2=0 a3=0 items=0 ppid=1959 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:25:53.589000 audit[2031]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.589000 audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca0ed7f0 a2=0 a3=0 items=0 ppid=1959 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.589000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:25:53.591000 audit[2033]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.591000 audit[2033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc91bcd0 a2=0 a3=0 items=0 ppid=1959 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.591000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:25:53.593000 audit[2035]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.593000 audit[2035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd53e1070 a2=0 a3=0 items=0 ppid=1959 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.593000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:25:53.595000 audit[2037]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.595000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcf831850 a2=0 a3=0 items=0 ppid=1959 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.595000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:53.597000 audit[2039]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.597000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc8df95c0 a2=0 a3=0 items=0 ppid=1959 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.597000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:25:53.606794 kubelet[2004]: E1212 17:25:53.606742 2004 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:25:53.608966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:25:53.609093 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
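The NETFILTER_CFG/SYSCALL/PROCTITLE triplets in this stretch are dockerd creating its chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) for both IPv4 and IPv6. Audit stores the command line hex-encoded with NUL-separated arguments, so each PROCTITLE record can be decoded back into the exact iptables invocation; decoding the first iptables record above (proctitle_decode.py is a hypothetical name):

    # proctitle_decode.py - audit PROCTITLE records store argv as hex with
    # NUL separators; decoding one recovers the iptables call docker made.
    hexstr = ("2F7573722F62696E2F69707461626C6573002D2D77616974"
              "002D74006E6174002D4E00444F434B4552")
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # /usr/bin/iptables --wait -t nat -N DOCKER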
Dec 12 17:25:53.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:25:53.609682 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.6M memory peak. Dec 12 17:25:53.600000 audit[2041]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.600000 audit[2041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff5418c00 a2=0 a3=0 items=0 ppid=1959 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:25:53.636000 audit[2045]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.636000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffce946a90 a2=0 a3=0 items=0 ppid=1959 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.636000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 12 17:25:53.638000 audit[2047]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.638000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd52e7e00 a2=0 a3=0 items=0 ppid=1959 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.638000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:25:53.640000 audit[2049]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.640000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc54a50e0 a2=0 a3=0 items=0 ppid=1959 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.640000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:25:53.641000 audit[2051]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.641000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc1cf9b30 a2=0 a3=0 items=0 ppid=1959 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.641000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:53.643000 audit[2053]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.643000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd7edfa40 a2=0 a3=0 items=0 ppid=1959 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:25:53.678000 audit[2083]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.678000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdf87fe20 a2=0 a3=0 items=0 ppid=1959 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 12 17:25:53.680000 audit[2085]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.680000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe25d7810 a2=0 a3=0 items=0 ppid=1959 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.680000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 12 17:25:53.682000 audit[2087]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.682000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2e472f0 a2=0 a3=0 items=0 ppid=1959 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.682000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Dec 12 17:25:53.684000 audit[2089]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.684000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2096850 a2=0 a3=0 items=0 ppid=1959 pid=2089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.684000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Dec 12 17:25:53.685000 audit[2091]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.685000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd81af620 a2=0 a3=0 items=0 ppid=1959 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.685000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Dec 12 17:25:53.687000 audit[2093]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.687000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffea347b50 a2=0 a3=0 items=0 ppid=1959 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.687000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:53.689000 audit[2095]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.689000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe6147360 a2=0 a3=0 items=0 ppid=1959 pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.689000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:25:53.691000 audit[2097]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.691000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd0243c30 a2=0 a3=0 items=0 ppid=1959 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.691000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 12 17:25:53.694000 audit[2099]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.694000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffec74d6b0 a2=0 a3=0 items=0 ppid=1959 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.694000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Dec 12 17:25:53.696000 audit[2101]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.696000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffed319b60 a2=0 a3=0 items=0 ppid=1959 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.696000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Dec 12 17:25:53.698000 audit[2103]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.698000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd1777ca0 a2=0 a3=0 items=0 ppid=1959 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.698000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Dec 12 17:25:53.700000 audit[2105]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.700000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff0980290 a2=0 a3=0 items=0 ppid=1959 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.700000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 12 17:25:53.702000 audit[2107]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.702000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffd0544060 a2=0 a3=0 items=0 ppid=1959 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.702000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Dec 12 17:25:53.707000 audit[2112]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.707000 audit[2112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff8ba0320 a2=0 a3=0 items=0 ppid=1959 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.707000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:25:53.709000 audit[2114]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.709000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd1997ad0 a2=0 a3=0 items=0 ppid=1959 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.709000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:25:53.711000 audit[2116]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.711000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffe0cba940 a2=0 a3=0 items=0 ppid=1959 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.711000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:25:53.712000 audit[2118]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.712000 audit[2118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdefe2f90 a2=0 a3=0 items=0 ppid=1959 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 12 17:25:53.714000 audit[2120]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.714000 audit[2120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe303c580 a2=0 a3=0 items=0 ppid=1959 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 12 17:25:53.716000 audit[2122]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:25:53.716000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffb62b930 a2=0 a3=0 items=0 ppid=1959 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 12 17:25:53.737000 audit[2127]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2127 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.737000 audit[2127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff28bf750 a2=0 a3=0 items=0 ppid=1959 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.737000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 12 17:25:53.738000 audit[2129]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.738000 audit[2129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc0743b50 a2=0 a3=0 items=0 ppid=1959 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.738000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 12 17:25:53.746000 audit[2137]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.746000 audit[2137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffe4dbbae0 a2=0 a3=0 items=0 ppid=1959 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.746000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Dec 12 17:25:53.756000 audit[2143]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.756000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe88bc180 a2=0 a3=0 items=0 ppid=1959 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.756000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Dec 12 17:25:53.758000 audit[2145]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.758000 audit[2145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffffdadb110 a2=0 a3=0 items=0 ppid=1959 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.758000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 12 17:25:53.760000 audit[2147]: NETFILTER_CFG 
table=filter:39 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.760000 audit[2147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffedb0d430 a2=0 a3=0 items=0 ppid=1959 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.760000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Dec 12 17:25:53.761000 audit[2149]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.761000 audit[2149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffda2221b0 a2=0 a3=0 items=0 ppid=1959 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.761000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 12 17:25:53.763000 audit[2151]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:25:53.763000 audit[2151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffec909bf0 a2=0 a3=0 items=0 ppid=1959 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:25:53.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 12 17:25:53.764088 systemd-networkd[1604]: docker0: Link UP Dec 12 17:25:53.768748 dockerd[1959]: time="2025-12-12T17:25:53.768696280Z" level=info msg="Loading containers: done." Dec 12 17:25:53.782409 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2022697489-merged.mount: Deactivated successfully. Dec 12 17:25:53.799802 dockerd[1959]: time="2025-12-12T17:25:53.799722800Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 12 17:25:53.799949 dockerd[1959]: time="2025-12-12T17:25:53.799809720Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 12 17:25:53.800002 dockerd[1959]: time="2025-12-12T17:25:53.799983920Z" level=info msg="Initializing buildkit" Dec 12 17:25:53.825688 dockerd[1959]: time="2025-12-12T17:25:53.825640240Z" level=info msg="Completed buildkit initialization" Dec 12 17:25:53.830215 dockerd[1959]: time="2025-12-12T17:25:53.830163160Z" level=info msg="Daemon has completed initialization" Dec 12 17:25:53.830384 dockerd[1959]: time="2025-12-12T17:25:53.830256040Z" level=info msg="API listen on /run/docker.sock" Dec 12 17:25:53.830630 systemd[1]: Started docker.service - Docker Application Container Engine. 
Dec 12 17:25:53.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:25:54.357801 chronyd[1652]: Selected source PHC0 Dec 12 17:25:54.882237 containerd[1692]: time="2025-12-12T17:25:54.882200251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Dec 12 17:25:55.776808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277157178.mount: Deactivated successfully. Dec 12 17:25:56.359017 containerd[1692]: time="2025-12-12T17:25:56.358954714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:56.361007 containerd[1692]: time="2025-12-12T17:25:56.360953821Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=22974850" Dec 12 17:25:56.362008 containerd[1692]: time="2025-12-12T17:25:56.361964099Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:56.365086 containerd[1692]: time="2025-12-12T17:25:56.365050748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:56.366082 containerd[1692]: time="2025-12-12T17:25:56.366053790Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 1.483815969s" Dec 12 17:25:56.366126 containerd[1692]: time="2025-12-12T17:25:56.366090754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Dec 12 17:25:56.366861 containerd[1692]: time="2025-12-12T17:25:56.366838907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Dec 12 17:25:57.380010 containerd[1692]: time="2025-12-12T17:25:57.379949135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:57.381057 containerd[1692]: time="2025-12-12T17:25:57.380551515Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19127323" Dec 12 17:25:57.381783 containerd[1692]: time="2025-12-12T17:25:57.381751724Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:57.384798 containerd[1692]: time="2025-12-12T17:25:57.384752183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:57.386148 containerd[1692]: time="2025-12-12T17:25:57.386097126Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id 
\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.019227235s" Dec 12 17:25:57.386148 containerd[1692]: time="2025-12-12T17:25:57.386140200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Dec 12 17:25:57.386932 containerd[1692]: time="2025-12-12T17:25:57.386884974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Dec 12 17:25:58.276205 containerd[1692]: time="2025-12-12T17:25:58.276108948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:58.277967 containerd[1692]: time="2025-12-12T17:25:58.277918254Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=0" Dec 12 17:25:58.278943 containerd[1692]: time="2025-12-12T17:25:58.278896379Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:58.281860 containerd[1692]: time="2025-12-12T17:25:58.281802804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:58.282792 containerd[1692]: time="2025-12-12T17:25:58.282768449Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 895.851668ms" Dec 12 17:25:58.282975 containerd[1692]: time="2025-12-12T17:25:58.282870023Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Dec 12 17:25:58.283443 containerd[1692]: time="2025-12-12T17:25:58.283354969Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Dec 12 17:25:59.215309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3227900559.mount: Deactivated successfully. 
Dec 12 17:25:59.421334 containerd[1692]: time="2025-12-12T17:25:59.421263837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:59.422839 containerd[1692]: time="2025-12-12T17:25:59.422784108Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=0" Dec 12 17:25:59.423848 containerd[1692]: time="2025-12-12T17:25:59.423812890Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:59.426693 containerd[1692]: time="2025-12-12T17:25:59.426648811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:25:59.427778 containerd[1692]: time="2025-12-12T17:25:59.427740556Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.144356728s" Dec 12 17:25:59.427830 containerd[1692]: time="2025-12-12T17:25:59.427775787Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Dec 12 17:25:59.428539 containerd[1692]: time="2025-12-12T17:25:59.428503314Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Dec 12 17:26:00.018015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582531056.mount: Deactivated successfully. 
Dec 12 17:26:00.531429 containerd[1692]: time="2025-12-12T17:26:00.531359335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:00.533098 containerd[1692]: time="2025-12-12T17:26:00.533045994Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=130" Dec 12 17:26:00.534267 containerd[1692]: time="2025-12-12T17:26:00.534231219Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:00.537643 containerd[1692]: time="2025-12-12T17:26:00.537601818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:00.539178 containerd[1692]: time="2025-12-12T17:26:00.539143714Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.110605056s" Dec 12 17:26:00.539218 containerd[1692]: time="2025-12-12T17:26:00.539179505Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Dec 12 17:26:00.539803 containerd[1692]: time="2025-12-12T17:26:00.539770637Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Dec 12 17:26:01.069692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3096352097.mount: Deactivated successfully. 
Dec 12 17:26:01.075810 containerd[1692]: time="2025-12-12T17:26:01.075765415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:01.076807 containerd[1692]: time="2025-12-12T17:26:01.076616539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0" Dec 12 17:26:01.077741 containerd[1692]: time="2025-12-12T17:26:01.077711585Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:01.080138 containerd[1692]: time="2025-12-12T17:26:01.080078717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:01.080888 containerd[1692]: time="2025-12-12T17:26:01.080855121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 541.053489ms" Dec 12 17:26:01.081006 containerd[1692]: time="2025-12-12T17:26:01.080989762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Dec 12 17:26:01.081493 containerd[1692]: time="2025-12-12T17:26:01.081474604Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Dec 12 17:26:01.752336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1486197702.mount: Deactivated successfully. Dec 12 17:26:03.186225 containerd[1692]: time="2025-12-12T17:26:03.186146864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:03.187468 containerd[1692]: time="2025-12-12T17:26:03.187410389Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=85821047" Dec 12 17:26:03.192856 containerd[1692]: time="2025-12-12T17:26:03.192800011Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:03.195965 containerd[1692]: time="2025-12-12T17:26:03.195928743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:03.196959 containerd[1692]: time="2025-12-12T17:26:03.196917667Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.115336622s" Dec 12 17:26:03.196959 containerd[1692]: time="2025-12-12T17:26:03.196953187Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Dec 12 17:26:03.624683 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
Dec 12 17:26:03.626982 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:03.753962 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:03.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:03.755531 kernel: kauditd_printk_skb: 134 callbacks suppressed Dec 12 17:26:03.755581 kernel: audit: type=1130 audit(1765560363.753:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:03.758983 (kubelet)[2408]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 12 17:26:03.790636 kubelet[2408]: E1212 17:26:03.790586 2408 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 12 17:26:03.792774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 12 17:26:03.792901 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 12 17:26:03.793536 systemd[1]: kubelet.service: Consumed 138ms CPU time, 111.3M memory peak. Dec 12 17:26:03.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:03.798167 kernel: audit: type=1131 audit(1765560363.792:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:08.501500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:08.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.501755 systemd[1]: kubelet.service: Consumed 138ms CPU time, 111.3M memory peak. Dec 12 17:26:08.503638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:08.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.506831 kernel: audit: type=1130 audit(1765560368.500:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.506894 kernel: audit: type=1131 audit(1765560368.500:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.528103 systemd[1]: Reload requested from client PID 2423 ('systemctl') (unit session-7.scope)... Dec 12 17:26:08.528129 systemd[1]: Reloading... 
Dec 12 17:26:08.591139 zram_generator::config[2465]: No configuration found. Dec 12 17:26:08.777429 systemd[1]: Reloading finished in 249 ms. Dec 12 17:26:08.792631 kernel: audit: type=1334 audit(1765560368.788:294): prog-id=63 op=LOAD Dec 12 17:26:08.792720 kernel: audit: type=1334 audit(1765560368.788:295): prog-id=48 op=UNLOAD Dec 12 17:26:08.792752 kernel: audit: type=1334 audit(1765560368.788:296): prog-id=64 op=LOAD Dec 12 17:26:08.788000 audit: BPF prog-id=63 op=LOAD Dec 12 17:26:08.788000 audit: BPF prog-id=48 op=UNLOAD Dec 12 17:26:08.788000 audit: BPF prog-id=64 op=LOAD Dec 12 17:26:08.788000 audit: BPF prog-id=65 op=LOAD Dec 12 17:26:08.793702 kernel: audit: type=1334 audit(1765560368.788:297): prog-id=65 op=LOAD Dec 12 17:26:08.788000 audit: BPF prog-id=49 op=UNLOAD Dec 12 17:26:08.794543 kernel: audit: type=1334 audit(1765560368.788:298): prog-id=49 op=UNLOAD Dec 12 17:26:08.788000 audit: BPF prog-id=50 op=UNLOAD Dec 12 17:26:08.795382 kernel: audit: type=1334 audit(1765560368.788:299): prog-id=50 op=UNLOAD Dec 12 17:26:08.789000 audit: BPF prog-id=66 op=LOAD Dec 12 17:26:08.796189 kernel: audit: type=1334 audit(1765560368.789:300): prog-id=66 op=LOAD Dec 12 17:26:08.789000 audit: BPF prog-id=51 op=UNLOAD Dec 12 17:26:08.797197 kernel: audit: type=1334 audit(1765560368.789:301): prog-id=51 op=UNLOAD Dec 12 17:26:08.789000 audit: BPF prog-id=67 op=LOAD Dec 12 17:26:08.798166 kernel: audit: type=1334 audit(1765560368.789:302): prog-id=67 op=LOAD Dec 12 17:26:08.790000 audit: BPF prog-id=68 op=LOAD Dec 12 17:26:08.790000 audit: BPF prog-id=52 op=UNLOAD Dec 12 17:26:08.790000 audit: BPF prog-id=53 op=UNLOAD Dec 12 17:26:08.791000 audit: BPF prog-id=69 op=LOAD Dec 12 17:26:08.799126 kernel: audit: type=1334 audit(1765560368.790:303): prog-id=68 op=LOAD Dec 12 17:26:08.791000 audit: BPF prog-id=60 op=UNLOAD Dec 12 17:26:08.791000 audit: BPF prog-id=70 op=LOAD Dec 12 17:26:08.791000 audit: BPF prog-id=71 op=LOAD Dec 12 17:26:08.791000 audit: BPF prog-id=61 op=UNLOAD Dec 12 17:26:08.791000 audit: BPF prog-id=62 op=UNLOAD Dec 12 17:26:08.792000 audit: BPF prog-id=72 op=LOAD Dec 12 17:26:08.795000 audit: BPF prog-id=55 op=UNLOAD Dec 12 17:26:08.795000 audit: BPF prog-id=73 op=LOAD Dec 12 17:26:08.795000 audit: BPF prog-id=74 op=LOAD Dec 12 17:26:08.795000 audit: BPF prog-id=56 op=UNLOAD Dec 12 17:26:08.795000 audit: BPF prog-id=57 op=UNLOAD Dec 12 17:26:08.796000 audit: BPF prog-id=75 op=LOAD Dec 12 17:26:08.796000 audit: BPF prog-id=45 op=UNLOAD Dec 12 17:26:08.796000 audit: BPF prog-id=76 op=LOAD Dec 12 17:26:08.797000 audit: BPF prog-id=77 op=LOAD Dec 12 17:26:08.797000 audit: BPF prog-id=46 op=UNLOAD Dec 12 17:26:08.797000 audit: BPF prog-id=47 op=UNLOAD Dec 12 17:26:08.798000 audit: BPF prog-id=78 op=LOAD Dec 12 17:26:08.798000 audit: BPF prog-id=54 op=UNLOAD Dec 12 17:26:08.798000 audit: BPF prog-id=79 op=LOAD Dec 12 17:26:08.798000 audit: BPF prog-id=80 op=LOAD Dec 12 17:26:08.798000 audit: BPF prog-id=43 op=UNLOAD Dec 12 17:26:08.798000 audit: BPF prog-id=44 op=UNLOAD Dec 12 17:26:08.799000 audit: BPF prog-id=81 op=LOAD Dec 12 17:26:08.799000 audit: BPF prog-id=59 op=UNLOAD Dec 12 17:26:08.801000 audit: BPF prog-id=82 op=LOAD Dec 12 17:26:08.801000 audit: BPF prog-id=58 op=UNLOAD Dec 12 17:26:08.816985 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 12 17:26:08.817061 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 12 17:26:08.817359 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 12 17:26:08.817412 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95M memory peak. Dec 12 17:26:08.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 12 17:26:08.818882 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:08.942863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:08.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:08.948130 (kubelet)[2516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:08.980892 kubelet[2516]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:08.980892 kubelet[2516]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:08.981695 kubelet[2516]: I1212 17:26:08.981648 2516 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:09.703475 kubelet[2516]: I1212 17:26:09.703428 2516 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:26:09.703475 kubelet[2516]: I1212 17:26:09.703461 2516 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:09.705845 kubelet[2516]: I1212 17:26:09.705820 2516 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:26:09.705845 kubelet[2516]: I1212 17:26:09.705841 2516 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:26:09.706079 kubelet[2516]: I1212 17:26:09.706055 2516 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:26:09.714754 kubelet[2516]: E1212 17:26:09.714682 2516 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.11.71:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.11.71:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 12 17:26:09.715727 kubelet[2516]: I1212 17:26:09.715655 2516 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:09.718446 kubelet[2516]: I1212 17:26:09.718428 2516 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:09.720781 kubelet[2516]: I1212 17:26:09.720756 2516 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:26:09.721017 kubelet[2516]: I1212 17:26:09.720982 2516 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:09.721198 kubelet[2516]: I1212 17:26:09.721008 2516 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-e-d121438740","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:09.721198 kubelet[2516]: I1212 17:26:09.721193 2516 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:09.721311 kubelet[2516]: I1212 17:26:09.721202 2516 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:26:09.721311 kubelet[2516]: I1212 17:26:09.721289 2516 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:26:09.723607 kubelet[2516]: I1212 17:26:09.723585 2516 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:09.726474 kubelet[2516]: I1212 17:26:09.726446 2516 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:26:09.726474 kubelet[2516]: I1212 17:26:09.726473 2516 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:09.726540 kubelet[2516]: I1212 17:26:09.726496 2516 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:26:09.726540 kubelet[2516]: I1212 17:26:09.726506 2516 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:09.727206 kubelet[2516]: E1212 17:26:09.727148 2516 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.11.71:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-e-d121438740&limit=500&resourceVersion=0\": dial tcp 10.0.11.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 12 17:26:09.727284 kubelet[2516]: E1212 17:26:09.727261 2516 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.11.71:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.11.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 12 17:26:09.728133 kubelet[2516]: I1212 17:26:09.727760 2516 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:09.728437 kubelet[2516]: I1212 17:26:09.728407 2516 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:26:09.728480 kubelet[2516]: I1212 17:26:09.728441 2516 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 17:26:09.728503 kubelet[2516]: W1212 17:26:09.728481 2516 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 12 17:26:09.731193 kubelet[2516]: I1212 17:26:09.731173 2516 server.go:1262] "Started kubelet" Dec 12 17:26:09.731746 kubelet[2516]: I1212 17:26:09.731646 2516 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:09.731746 kubelet[2516]: I1212 17:26:09.731699 2516 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:26:09.732512 kubelet[2516]: I1212 17:26:09.732457 2516 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:09.732748 kubelet[2516]: I1212 17:26:09.732723 2516 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:09.733264 kubelet[2516]: I1212 17:26:09.732737 2516 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:09.737993 kubelet[2516]: I1212 17:26:09.737956 2516 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:09.738888 kubelet[2516]: I1212 17:26:09.738866 2516 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:26:09.739965 kubelet[2516]: E1212 17:26:09.739567 2516 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515-1-0-e-d121438740\" not found" Dec 12 17:26:09.739965 kubelet[2516]: I1212 17:26:09.739684 2516 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:26:09.740277 kubelet[2516]: I1212 17:26:09.740253 2516 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:26:09.740354 kubelet[2516]: I1212 17:26:09.740340 2516 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:26:09.741032 kubelet[2516]: E1212 17:26:09.740998 2516 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.11.71:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.11.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 12 17:26:09.741990 kubelet[2516]: E1212 17:26:09.741808 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.11.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-d121438740?timeout=10s\": dial tcp 10.0.11.71:6443: connect: connection refused" interval="200ms" 
Dec 12 17:26:09.741990 kubelet[2516]: E1212 17:26:09.741962 2516 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:26:09.742287 kubelet[2516]: I1212 17:26:09.742263 2516 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:26:09.742382 kubelet[2516]: I1212 17:26:09.742359 2516 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:09.742000 audit[2533]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.742000 audit[2533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdeaeef70 a2=0 a3=0 items=0 ppid=2516 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:09.744385 kubelet[2516]: E1212 17:26:09.742577 2516 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.11.71:6443/api/v1/namespaces/default/events\": dial tcp 10.0.11.71:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-e-d121438740.188087cd4b54f7e6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-e-d121438740,UID:ci-4515-1-0-e-d121438740,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-e-d121438740,},FirstTimestamp:2025-12-12 17:26:09.731139558 +0000 UTC m=+0.780288286,LastTimestamp:2025-12-12 17:26:09.731139558 +0000 UTC m=+0.780288286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-e-d121438740,}" Dec 12 17:26:09.744610 kubelet[2516]: I1212 17:26:09.744584 2516 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:26:09.743000 audit[2534]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.743000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe745efd0 a2=0 a3=0 items=0 ppid=2516 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 17:26:09.748000 audit[2536]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.748000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffee829e80 a2=0 a3=0 items=0 ppid=2516 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.748000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:09.750000 audit[2538]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.750000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe4f4bcf0 a2=0 a3=0 items=0 ppid=2516 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:09.754560 kubelet[2516]: I1212 17:26:09.754539 2516 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:09.754560 kubelet[2516]: I1212 17:26:09.754557 2516 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:09.754655 kubelet[2516]: I1212 17:26:09.754575 2516 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:09.756000 audit[2541]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.756000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffed1e0bc0 a2=0 a3=0 items=0 ppid=2516 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.756000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F380000002D2D737263003132372E Dec 12 17:26:09.758200 kubelet[2516]: I1212 17:26:09.758179 2516 policy_none.go:49] "None policy: Start" Dec 12 17:26:09.758243 kubelet[2516]: I1212 17:26:09.758206 2516 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:26:09.758243 kubelet[2516]: I1212 17:26:09.758218 2516 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:26:09.758910 kubelet[2516]: I1212 17:26:09.758872 2516 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Dec 12 17:26:09.758000 audit[2543]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:09.758000 audit[2543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcdd68710 a2=0 a3=0 items=0 ppid=2516 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.758000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 12 17:26:09.759809 kubelet[2516]: I1212 17:26:09.759787 2516 policy_none.go:47] "Start" Dec 12 17:26:09.759000 audit[2544]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.759000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8fc4780 a2=0 a3=0 items=0 ppid=2516 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:09.761159 kubelet[2516]: I1212 17:26:09.759787 2516 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Dec 12 17:26:09.761210 kubelet[2516]: I1212 17:26:09.761168 2516 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:26:09.761210 kubelet[2516]: I1212 17:26:09.761189 2516 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:26:09.761250 kubelet[2516]: E1212 17:26:09.761223 2516 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:09.760000 audit[2545]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.760000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff3de7960 a2=0 a3=0 items=0 ppid=2516 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:09.760000 audit[2546]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:09.760000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc38c8f90 a2=0 a3=0 items=0 ppid=2516 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.760000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 12 17:26:09.761000 audit[2547]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:09.761000 audit[2547]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe01d5ba0 a2=0 a3=0 items=0 ppid=2516 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 12 17:26:09.762000 audit[2548]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:09.762000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc49d0bd0 a2=0 a3=0 items=0 ppid=2516 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:09.764264 kubelet[2516]: E1212 17:26:09.764231 2516 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.11.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.11.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:26:09.763000 audit[2549]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:09.763000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe83f3b70 a2=0 a3=0 items=0 ppid=2516 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:09.763000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 12 17:26:09.767692 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 12 17:26:09.786310 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 12 17:26:09.789871 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 12 17:26:09.810941 kubelet[2516]: E1212 17:26:09.810895 2516 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:26:09.811788 kubelet[2516]: I1212 17:26:09.811556 2516 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:09.811788 kubelet[2516]: I1212 17:26:09.811574 2516 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:09.811889 kubelet[2516]: I1212 17:26:09.811817 2516 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:09.813039 kubelet[2516]: E1212 17:26:09.813000 2516 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:26:09.813039 kubelet[2516]: E1212 17:26:09.813043 2516 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-e-d121438740\" not found" Dec 12 17:26:09.872250 systemd[1]: Created slice kubepods-burstable-pod46ffb5544025851dfbe41cd9f386496d.slice - libcontainer container kubepods-burstable-pod46ffb5544025851dfbe41cd9f386496d.slice. Dec 12 17:26:09.885508 kubelet[2516]: E1212 17:26:09.885456 2516 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:09.887891 systemd[1]: Created slice kubepods-burstable-poda1e59e071b4bcd3dbfa392dc661cd89f.slice - libcontainer container kubepods-burstable-poda1e59e071b4bcd3dbfa392dc661cd89f.slice. Dec 12 17:26:09.898504 kubelet[2516]: E1212 17:26:09.898468 2516 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:09.900798 systemd[1]: Created slice kubepods-burstable-podd7cd95789eb9cf58ea82adec21e4106b.slice - libcontainer container kubepods-burstable-podd7cd95789eb9cf58ea82adec21e4106b.slice. Dec 12 17:26:09.902412 kubelet[2516]: E1212 17:26:09.902369 2516 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:09.913877 kubelet[2516]: I1212 17:26:09.913843 2516 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:09.914306 kubelet[2516]: E1212 17:26:09.914272 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.11.71:6443/api/v1/nodes\": dial tcp 10.0.11.71:6443: connect: connection refused" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941640 kubelet[2516]: I1212 17:26:09.941603 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd95789eb9cf58ea82adec21e4106b-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-e-d121438740\" (UID: \"d7cd95789eb9cf58ea82adec21e4106b\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941640 kubelet[2516]: I1212 17:26:09.941640 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7cd95789eb9cf58ea82adec21e4106b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-e-d121438740\" (UID: \"d7cd95789eb9cf58ea82adec21e4106b\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941752 kubelet[2516]: I1212 17:26:09.941661 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941752 kubelet[2516]: I1212 17:26:09.941676 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941752 kubelet[2516]: I1212 17:26:09.941691 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941752 kubelet[2516]: I1212 17:26:09.941706 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd95789eb9cf58ea82adec21e4106b-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-e-d121438740\" (UID: \"d7cd95789eb9cf58ea82adec21e4106b\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941752 kubelet[2516]: I1212 17:26:09.941720 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941860 kubelet[2516]: I1212 17:26:09.941775 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.941860 kubelet[2516]: I1212 17:26:09.941815 2516 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1e59e071b4bcd3dbfa392dc661cd89f-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-e-d121438740\" (UID: \"a1e59e071b4bcd3dbfa392dc661cd89f\") " pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:09.942293 kubelet[2516]: E1212 17:26:09.942247 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.11.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-d121438740?timeout=10s\": dial tcp 10.0.11.71:6443: connect: connection refused" interval="400ms" Dec 12 17:26:10.116636 kubelet[2516]: I1212 17:26:10.116592 2516 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:10.117103 kubelet[2516]: E1212 17:26:10.116956 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.11.71:6443/api/v1/nodes\": dial tcp 10.0.11.71:6443: connect: connection refused" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:10.190383 containerd[1692]: time="2025-12-12T17:26:10.190208690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-e-d121438740,Uid:46ffb5544025851dfbe41cd9f386496d,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:10.201639 containerd[1692]: time="2025-12-12T17:26:10.201596668Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-e-d121438740,Uid:a1e59e071b4bcd3dbfa392dc661cd89f,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:10.204461 containerd[1692]: time="2025-12-12T17:26:10.204436122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-e-d121438740,Uid:d7cd95789eb9cf58ea82adec21e4106b,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:10.343669 kubelet[2516]: E1212 17:26:10.343621 2516 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.11.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-d121438740?timeout=10s\": dial tcp 10.0.11.71:6443: connect: connection refused" interval="800ms" Dec 12 17:26:10.519677 kubelet[2516]: I1212 17:26:10.519426 2516 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:10.519800 kubelet[2516]: E1212 17:26:10.519742 2516 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.11.71:6443/api/v1/nodes\": dial tcp 10.0.11.71:6443: connect: connection refused" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:10.718551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2191831440.mount: Deactivated successfully. Dec 12 17:26:10.724944 containerd[1692]: time="2025-12-12T17:26:10.724888008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:10.727903 containerd[1692]: time="2025-12-12T17:26:10.727795623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=886" Dec 12 17:26:10.732327 containerd[1692]: time="2025-12-12T17:26:10.732293486Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:10.733421 containerd[1692]: time="2025-12-12T17:26:10.733390572Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:10.736229 containerd[1692]: time="2025-12-12T17:26:10.736180346Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:10.738018 containerd[1692]: time="2025-12-12T17:26:10.737989155Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:10.739540 containerd[1692]: time="2025-12-12T17:26:10.739508643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 12 17:26:10.740040 containerd[1692]: time="2025-12-12T17:26:10.740017365Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 531.05118ms" Dec 12 17:26:10.741546 containerd[1692]: 
time="2025-12-12T17:26:10.741505333Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Dec 12 17:26:10.746056 containerd[1692]: time="2025-12-12T17:26:10.745679474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 551.907406ms" Dec 12 17:26:10.748611 containerd[1692]: time="2025-12-12T17:26:10.748584489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 544.434328ms" Dec 12 17:26:10.760576 containerd[1692]: time="2025-12-12T17:26:10.760521470Z" level=info msg="connecting to shim 77a3f34f4d268fbb4bb7b75e12ef204adf134788739eb806bd42c69ba0c547e1" address="unix:///run/containerd/s/00b9439b6220083c54121856b75f7b58f43338416fab50f0fca1bbcc9f93ab92" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:10.780742 containerd[1692]: time="2025-12-12T17:26:10.780644492Z" level=info msg="connecting to shim f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2" address="unix:///run/containerd/s/40436e866d8bb7b4095aac010df10642d4c1c1a7e8dfaee2ccaa866d7b43c82b" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:10.790108 systemd[1]: Started cri-containerd-77a3f34f4d268fbb4bb7b75e12ef204adf134788739eb806bd42c69ba0c547e1.scope - libcontainer container 77a3f34f4d268fbb4bb7b75e12ef204adf134788739eb806bd42c69ba0c547e1. Dec 12 17:26:10.800375 containerd[1692]: time="2025-12-12T17:26:10.800330632Z" level=info msg="connecting to shim cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4" address="unix:///run/containerd/s/0d51fb46d4ac48db7714c41a359da71a02e1cba2bc75d1ee8222e55cc445e105" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:10.809329 systemd[1]: Started cri-containerd-f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2.scope - libcontainer container f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2. 
Dec 12 17:26:10.809000 audit: BPF prog-id=83 op=LOAD Dec 12 17:26:10.810000 audit: BPF prog-id=84 op=LOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.810000 audit: BPF prog-id=84 op=UNLOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.810000 audit: BPF prog-id=85 op=LOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.810000 audit: BPF prog-id=86 op=LOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.810000 audit: BPF prog-id=86 op=UNLOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.810000 audit: BPF prog-id=85 op=UNLOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.810000 audit: BPF prog-id=87 op=LOAD Dec 12 17:26:10.810000 audit[2576]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2565 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3737613366333466346432363866626234626237623735653132656632 Dec 12 17:26:10.825646 systemd[1]: Started cri-containerd-cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4.scope - libcontainer container cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4. Dec 12 17:26:10.827000 audit: BPF prog-id=88 op=LOAD Dec 12 17:26:10.827000 audit: BPF prog-id=89 op=LOAD Dec 12 17:26:10.827000 audit[2608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.827000 audit: BPF prog-id=89 op=UNLOAD Dec 12 17:26:10.827000 audit[2608]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.827000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.827000 audit: BPF prog-id=90 op=LOAD Dec 12 17:26:10.827000 audit[2608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.827000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.828000 audit: BPF prog-id=91 op=LOAD Dec 12 17:26:10.828000 audit[2608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.828000 audit: BPF prog-id=91 op=UNLOAD Dec 12 17:26:10.828000 audit[2608]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.828000 audit: BPF prog-id=90 op=UNLOAD Dec 12 17:26:10.828000 audit[2608]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.828000 audit: BPF prog-id=92 op=LOAD Dec 12 17:26:10.828000 audit[2608]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2597 pid=2608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.828000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639396163306564343133623236313836386465323064313030323637 Dec 12 17:26:10.844069 containerd[1692]: time="2025-12-12T17:26:10.844028815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-e-d121438740,Uid:d7cd95789eb9cf58ea82adec21e4106b,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a3f34f4d268fbb4bb7b75e12ef204adf134788739eb806bd42c69ba0c547e1\"" Dec 12 17:26:10.848000 audit: BPF prog-id=93 op=LOAD Dec 12 17:26:10.851619 containerd[1692]: time="2025-12-12T17:26:10.851583093Z" level=info msg="CreateContainer within sandbox 
\"77a3f34f4d268fbb4bb7b75e12ef204adf134788739eb806bd42c69ba0c547e1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 12 17:26:10.852000 audit: BPF prog-id=94 op=LOAD Dec 12 17:26:10.852000 audit[2647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.852000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.853000 audit: BPF prog-id=94 op=UNLOAD Dec 12 17:26:10.853000 audit[2647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.854000 audit: BPF prog-id=95 op=LOAD Dec 12 17:26:10.854000 audit[2647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.854000 audit: BPF prog-id=96 op=LOAD Dec 12 17:26:10.854000 audit[2647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.854000 audit: BPF prog-id=96 op=UNLOAD Dec 12 17:26:10.854000 audit[2647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.854000 
audit: BPF prog-id=95 op=UNLOAD Dec 12 17:26:10.854000 audit[2647]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.854000 audit: BPF prog-id=97 op=LOAD Dec 12 17:26:10.854000 audit[2647]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2626 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.854000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362346333333064336436363230613532383765303635663461386162 Dec 12 17:26:10.860558 containerd[1692]: time="2025-12-12T17:26:10.860501378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-e-d121438740,Uid:46ffb5544025851dfbe41cd9f386496d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2\"" Dec 12 17:26:10.864738 containerd[1692]: time="2025-12-12T17:26:10.864704400Z" level=info msg="Container 7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:10.868137 containerd[1692]: time="2025-12-12T17:26:10.868054697Z" level=info msg="CreateContainer within sandbox \"f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 12 17:26:10.874970 containerd[1692]: time="2025-12-12T17:26:10.874912132Z" level=info msg="CreateContainer within sandbox \"77a3f34f4d268fbb4bb7b75e12ef204adf134788739eb806bd42c69ba0c547e1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620\"" Dec 12 17:26:10.876129 containerd[1692]: time="2025-12-12T17:26:10.875626135Z" level=info msg="StartContainer for \"7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620\"" Dec 12 17:26:10.877810 containerd[1692]: time="2025-12-12T17:26:10.877743706Z" level=info msg="connecting to shim 7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620" address="unix:///run/containerd/s/00b9439b6220083c54121856b75f7b58f43338416fab50f0fca1bbcc9f93ab92" protocol=ttrpc version=3 Dec 12 17:26:10.883818 containerd[1692]: time="2025-12-12T17:26:10.883773457Z" level=info msg="Container 4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:10.884239 containerd[1692]: time="2025-12-12T17:26:10.884208659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-e-d121438740,Uid:a1e59e071b4bcd3dbfa392dc661cd89f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4\"" Dec 
12 17:26:10.889725 containerd[1692]: time="2025-12-12T17:26:10.889669567Z" level=info msg="CreateContainer within sandbox \"cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 12 17:26:10.895821 containerd[1692]: time="2025-12-12T17:26:10.895776998Z" level=info msg="CreateContainer within sandbox \"f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa\"" Dec 12 17:26:10.896302 containerd[1692]: time="2025-12-12T17:26:10.896277040Z" level=info msg="StartContainer for \"4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa\"" Dec 12 17:26:10.897374 containerd[1692]: time="2025-12-12T17:26:10.897341366Z" level=info msg="connecting to shim 4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa" address="unix:///run/containerd/s/40436e866d8bb7b4095aac010df10642d4c1c1a7e8dfaee2ccaa866d7b43c82b" protocol=ttrpc version=3 Dec 12 17:26:10.900892 containerd[1692]: time="2025-12-12T17:26:10.900246260Z" level=info msg="Container c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:10.900361 systemd[1]: Started cri-containerd-7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620.scope - libcontainer container 7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620. Dec 12 17:26:10.910677 containerd[1692]: time="2025-12-12T17:26:10.910628073Z" level=info msg="CreateContainer within sandbox \"cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8\"" Dec 12 17:26:10.911472 containerd[1692]: time="2025-12-12T17:26:10.911444437Z" level=info msg="StartContainer for \"c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8\"" Dec 12 17:26:10.913667 kubelet[2516]: E1212 17:26:10.913635 2516 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.11.71:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.11.71:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 12 17:26:10.914026 containerd[1692]: time="2025-12-12T17:26:10.914001970Z" level=info msg="connecting to shim c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8" address="unix:///run/containerd/s/0d51fb46d4ac48db7714c41a359da71a02e1cba2bc75d1ee8222e55cc445e105" protocol=ttrpc version=3 Dec 12 17:26:10.918337 systemd[1]: Started cri-containerd-4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa.scope - libcontainer container 4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa. 
Dec 12 17:26:10.921000 audit: BPF prog-id=98 op=LOAD Dec 12 17:26:10.922000 audit: BPF prog-id=99 op=LOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.922000 audit: BPF prog-id=99 op=UNLOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.922000 audit: BPF prog-id=100 op=LOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.922000 audit: BPF prog-id=101 op=LOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.922000 audit: BPF prog-id=101 op=UNLOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.922000 audit: BPF prog-id=100 op=UNLOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.922000 audit: BPF prog-id=102 op=LOAD Dec 12 17:26:10.922000 audit[2690]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2565 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.922000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734353436323665336165653363623963363135613834366631393438 Dec 12 17:26:10.936482 systemd[1]: Started cri-containerd-c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8.scope - libcontainer container c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8. Dec 12 17:26:10.936000 audit: BPF prog-id=103 op=LOAD Dec 12 17:26:10.937000 audit: BPF prog-id=104 op=LOAD Dec 12 17:26:10.937000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.937000 audit: BPF prog-id=104 op=UNLOAD Dec 12 17:26:10.937000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.938000 audit: BPF prog-id=105 op=LOAD Dec 12 17:26:10.938000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.938000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.938000 audit: BPF prog-id=106 op=LOAD Dec 12 17:26:10.938000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.938000 audit: BPF prog-id=106 op=UNLOAD Dec 12 17:26:10.938000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.938000 audit: BPF prog-id=105 op=UNLOAD Dec 12 17:26:10.938000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.938000 audit: BPF prog-id=107 op=LOAD Dec 12 17:26:10.938000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2597 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463623666333034613963386438333966633966306465313966376137 Dec 12 17:26:10.953544 containerd[1692]: time="2025-12-12T17:26:10.953339931Z" level=info msg="StartContainer for \"7454626e3aee3cb9c615a846f194860102fa730fe99c5631d56d71565a807620\" returns successfully" Dec 12 17:26:10.954000 audit: BPF prog-id=108 op=LOAD Dec 12 17:26:10.954000 audit: BPF prog-id=109 op=LOAD Dec 12 17:26:10.954000 audit[2724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.955000 audit: BPF prog-id=109 op=UNLOAD Dec 12 17:26:10.955000 audit[2724]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.955000 audit: BPF prog-id=110 op=LOAD Dec 12 17:26:10.955000 audit[2724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.955000 audit: BPF prog-id=111 op=LOAD Dec 12 17:26:10.955000 audit[2724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.955000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.956000 audit: BPF prog-id=111 op=UNLOAD Dec 12 17:26:10.956000 audit[2724]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.956000 audit: BPF prog-id=110 op=UNLOAD Dec 12 17:26:10.956000 audit[2724]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.956000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.956000 audit: BPF prog-id=112 op=LOAD Dec 12 17:26:10.956000 audit[2724]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2626 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:10.956000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339343161333336643630333862656635383332353862633937643164 Dec 12 17:26:10.974420 containerd[1692]: time="2025-12-12T17:26:10.974383718Z" level=info msg="StartContainer for \"4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa\" returns successfully" Dec 12 17:26:10.991606 containerd[1692]: time="2025-12-12T17:26:10.991354004Z" level=info msg="StartContainer for \"c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8\" returns successfully" Dec 12 17:26:11.321876 kubelet[2516]: I1212 17:26:11.321831 2516 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:11.776668 kubelet[2516]: E1212 17:26:11.776330 2516 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:11.776766 kubelet[2516]: E1212 17:26:11.776442 2516 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:11.778984 kubelet[2516]: E1212 17:26:11.778956 2516 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:12.110037 kubelet[2516]: E1212 17:26:12.109723 2516 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-e-d121438740\" not found" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:12.178459 kubelet[2516]: I1212 17:26:12.178404 2516 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:12.241793 kubelet[2516]: I1212 17:26:12.241756 2516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.247228 kubelet[2516]: E1212 17:26:12.247196 2516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-e-d121438740\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.247412 kubelet[2516]: I1212 17:26:12.247340 2516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.249153 kubelet[2516]: E1212 17:26:12.249128 2516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-e-d121438740\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.249153 kubelet[2516]: I1212 17:26:12.249153 2516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.250589 kubelet[2516]: E1212 17:26:12.250562 2516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-e-d121438740\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.728607 kubelet[2516]: I1212 17:26:12.728575 2516 apiserver.go:52] "Watching apiserver" Dec 12 17:26:12.741245 kubelet[2516]: I1212 17:26:12.741212 2516 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:26:12.779375 kubelet[2516]: I1212 17:26:12.779351 2516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.782117 kubelet[2516]: I1212 17:26:12.781172 2516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.782117 kubelet[2516]: E1212 17:26:12.781417 2516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-e-d121438740\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:12.783060 kubelet[2516]: E1212 17:26:12.783032 2516 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-e-d121438740\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:14.395813 systemd[1]: Reload requested from client PID 2801 ('systemctl') (unit session-7.scope)... Dec 12 17:26:14.395829 systemd[1]: Reloading... Dec 12 17:26:14.408895 kubelet[2516]: I1212 17:26:14.408867 2516 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:14.463209 zram_generator::config[2847]: No configuration found. Dec 12 17:26:14.644095 systemd[1]: Reloading finished in 247 ms. Dec 12 17:26:14.673511 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:14.697068 systemd[1]: kubelet.service: Deactivated successfully. Dec 12 17:26:14.698206 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:14.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:14.698280 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 124.7M memory peak. Dec 12 17:26:14.700287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 12 17:26:14.700476 kernel: kauditd_printk_skb: 200 callbacks suppressed Dec 12 17:26:14.700522 kernel: audit: type=1131 audit(1765560374.697:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:14.701000 audit: BPF prog-id=113 op=LOAD Dec 12 17:26:14.701000 audit: BPF prog-id=63 op=UNLOAD Dec 12 17:26:14.704812 kernel: audit: type=1334 audit(1765560374.701:397): prog-id=113 op=LOAD Dec 12 17:26:14.704865 kernel: audit: type=1334 audit(1765560374.701:398): prog-id=63 op=UNLOAD Dec 12 17:26:14.704883 kernel: audit: type=1334 audit(1765560374.701:399): prog-id=114 op=LOAD Dec 12 17:26:14.701000 audit: BPF prog-id=114 op=LOAD Dec 12 17:26:14.701000 audit: BPF prog-id=115 op=LOAD Dec 12 17:26:14.706352 kernel: audit: type=1334 audit(1765560374.701:400): prog-id=115 op=LOAD Dec 12 17:26:14.706397 kernel: audit: type=1334 audit(1765560374.701:401): prog-id=64 op=UNLOAD Dec 12 17:26:14.701000 audit: BPF prog-id=64 op=UNLOAD Dec 12 17:26:14.701000 audit: BPF prog-id=65 op=UNLOAD Dec 12 17:26:14.707865 kernel: audit: type=1334 audit(1765560374.701:402): prog-id=65 op=UNLOAD Dec 12 17:26:14.707905 kernel: audit: type=1334 audit(1765560374.702:403): prog-id=116 op=LOAD Dec 12 17:26:14.702000 audit: BPF prog-id=116 op=LOAD Dec 12 17:26:14.708628 kernel: audit: type=1334 audit(1765560374.702:404): prog-id=117 op=LOAD Dec 12 17:26:14.702000 audit: BPF prog-id=117 op=LOAD Dec 12 17:26:14.709366 kernel: audit: type=1334 audit(1765560374.702:405): prog-id=79 op=UNLOAD Dec 12 17:26:14.702000 audit: BPF prog-id=79 op=UNLOAD Dec 12 17:26:14.702000 audit: BPF prog-id=80 op=UNLOAD Dec 12 17:26:14.705000 audit: BPF prog-id=118 op=LOAD Dec 12 17:26:14.718000 audit: BPF prog-id=75 op=UNLOAD Dec 12 17:26:14.718000 audit: BPF prog-id=119 op=LOAD Dec 12 17:26:14.718000 audit: BPF prog-id=120 op=LOAD Dec 12 17:26:14.718000 audit: BPF prog-id=76 op=UNLOAD Dec 12 17:26:14.718000 audit: BPF prog-id=77 op=UNLOAD Dec 12 17:26:14.718000 audit: BPF prog-id=121 op=LOAD Dec 12 17:26:14.718000 audit: BPF prog-id=82 op=UNLOAD Dec 12 17:26:14.720000 audit: BPF prog-id=122 op=LOAD Dec 12 17:26:14.720000 audit: BPF prog-id=81 op=UNLOAD Dec 12 17:26:14.721000 audit: BPF prog-id=123 op=LOAD Dec 12 17:26:14.721000 audit: BPF prog-id=69 op=UNLOAD Dec 12 17:26:14.721000 audit: BPF prog-id=124 op=LOAD Dec 12 17:26:14.721000 audit: BPF prog-id=125 op=LOAD Dec 12 17:26:14.721000 audit: BPF prog-id=70 op=UNLOAD Dec 12 17:26:14.721000 audit: BPF prog-id=71 op=UNLOAD Dec 12 17:26:14.721000 audit: BPF prog-id=126 op=LOAD Dec 12 17:26:14.721000 audit: BPF prog-id=66 op=UNLOAD Dec 12 17:26:14.721000 audit: BPF prog-id=127 op=LOAD Dec 12 17:26:14.721000 audit: BPF prog-id=128 op=LOAD Dec 12 17:26:14.721000 audit: BPF prog-id=67 op=UNLOAD Dec 12 17:26:14.721000 audit: BPF prog-id=68 op=UNLOAD Dec 12 17:26:14.722000 audit: BPF prog-id=129 op=LOAD Dec 12 17:26:14.722000 audit: BPF prog-id=78 op=UNLOAD Dec 12 17:26:14.722000 audit: BPF prog-id=130 op=LOAD Dec 12 17:26:14.722000 audit: BPF prog-id=72 op=UNLOAD Dec 12 17:26:14.722000 audit: BPF prog-id=131 op=LOAD Dec 12 17:26:14.722000 audit: BPF prog-id=132 op=LOAD Dec 12 17:26:14.722000 audit: BPF prog-id=73 op=UNLOAD Dec 12 17:26:14.722000 audit: BPF prog-id=74 op=UNLOAD Dec 12 17:26:14.868018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 12 17:26:14.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:26:14.872746 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 12 17:26:14.915719 kubelet[2892]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 12 17:26:14.915719 kubelet[2892]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 12 17:26:14.916042 kubelet[2892]: I1212 17:26:14.915770 2892 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 12 17:26:14.923177 kubelet[2892]: I1212 17:26:14.923134 2892 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Dec 12 17:26:14.923177 kubelet[2892]: I1212 17:26:14.923166 2892 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 12 17:26:14.923297 kubelet[2892]: I1212 17:26:14.923197 2892 watchdog_linux.go:95] "Systemd watchdog is not enabled" Dec 12 17:26:14.923297 kubelet[2892]: I1212 17:26:14.923204 2892 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 12 17:26:14.923418 kubelet[2892]: I1212 17:26:14.923403 2892 server.go:956] "Client rotation is on, will bootstrap in background" Dec 12 17:26:14.924629 kubelet[2892]: I1212 17:26:14.924561 2892 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 12 17:26:14.927328 kubelet[2892]: I1212 17:26:14.927308 2892 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 12 17:26:14.929757 kubelet[2892]: I1212 17:26:14.929738 2892 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 12 17:26:14.932143 kubelet[2892]: I1212 17:26:14.932076 2892 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Dec 12 17:26:14.932329 kubelet[2892]: I1212 17:26:14.932302 2892 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 12 17:26:14.932602 kubelet[2892]: I1212 17:26:14.932327 2892 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-e-d121438740","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 12 17:26:14.932602 kubelet[2892]: I1212 17:26:14.932578 2892 topology_manager.go:138] "Creating topology manager with none policy" Dec 12 17:26:14.932602 kubelet[2892]: I1212 17:26:14.932607 2892 container_manager_linux.go:306] "Creating device plugin manager" Dec 12 17:26:14.932800 kubelet[2892]: I1212 17:26:14.932679 2892 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Dec 12 17:26:14.933568 kubelet[2892]: I1212 17:26:14.933552 2892 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:14.933721 kubelet[2892]: I1212 17:26:14.933710 2892 kubelet.go:475] "Attempting to sync node with API server" Dec 12 17:26:14.933753 kubelet[2892]: I1212 17:26:14.933729 2892 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 12 17:26:14.933776 kubelet[2892]: I1212 17:26:14.933754 2892 kubelet.go:387] "Adding apiserver pod source" Dec 12 17:26:14.933776 kubelet[2892]: I1212 17:26:14.933767 2892 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 12 17:26:14.934892 kubelet[2892]: I1212 17:26:14.934871 2892 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Dec 12 17:26:14.935441 kubelet[2892]: I1212 17:26:14.935421 2892 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 12 17:26:14.935468 kubelet[2892]: I1212 17:26:14.935455 2892 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Dec 12 
17:26:14.938252 kubelet[2892]: I1212 17:26:14.938223 2892 server.go:1262] "Started kubelet" Dec 12 17:26:14.940328 kubelet[2892]: I1212 17:26:14.940273 2892 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 12 17:26:14.940396 kubelet[2892]: I1212 17:26:14.940342 2892 server_v1.go:49] "podresources" method="list" useActivePods=true Dec 12 17:26:14.940561 kubelet[2892]: I1212 17:26:14.940541 2892 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 12 17:26:14.940618 kubelet[2892]: I1212 17:26:14.940600 2892 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 12 17:26:14.941632 kubelet[2892]: I1212 17:26:14.941590 2892 server.go:310] "Adding debug handlers to kubelet server" Dec 12 17:26:14.942912 kubelet[2892]: I1212 17:26:14.942877 2892 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 12 17:26:14.947336 kubelet[2892]: I1212 17:26:14.947290 2892 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 12 17:26:14.947831 kubelet[2892]: I1212 17:26:14.947698 2892 volume_manager.go:313] "Starting Kubelet Volume Manager" Dec 12 17:26:14.947831 kubelet[2892]: E1212 17:26:14.947814 2892 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4515-1-0-e-d121438740\" not found" Dec 12 17:26:14.950498 kubelet[2892]: I1212 17:26:14.950468 2892 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 12 17:26:14.954331 kubelet[2892]: I1212 17:26:14.954300 2892 reconciler.go:29] "Reconciler: start to sync state" Dec 12 17:26:14.954649 kubelet[2892]: I1212 17:26:14.954624 2892 factory.go:223] Registration of the systemd container factory successfully Dec 12 17:26:14.954732 kubelet[2892]: I1212 17:26:14.954712 2892 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 12 17:26:14.958131 kubelet[2892]: E1212 17:26:14.958084 2892 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 12 17:26:14.958372 kubelet[2892]: I1212 17:26:14.958241 2892 factory.go:223] Registration of the containerd container factory successfully Dec 12 17:26:14.966832 kubelet[2892]: I1212 17:26:14.966511 2892 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Dec 12 17:26:14.967425 kubelet[2892]: I1212 17:26:14.967408 2892 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Dec 12 17:26:14.967454 kubelet[2892]: I1212 17:26:14.967427 2892 status_manager.go:244] "Starting to sync pod status with apiserver" Dec 12 17:26:14.967502 kubelet[2892]: I1212 17:26:14.967457 2892 kubelet.go:2427] "Starting kubelet main sync loop" Dec 12 17:26:14.967526 kubelet[2892]: E1212 17:26:14.967493 2892 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 12 17:26:15.003056 kubelet[2892]: I1212 17:26:15.003020 2892 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 12 17:26:15.003056 kubelet[2892]: I1212 17:26:15.003043 2892 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 12 17:26:15.003056 kubelet[2892]: I1212 17:26:15.003065 2892 state_mem.go:36] "Initialized new in-memory state store" Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003395 2892 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003418 2892 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003439 2892 policy_none.go:49] "None policy: Start" Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003447 2892 memory_manager.go:187] "Starting memorymanager" policy="None" Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003459 2892 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003559 2892 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Dec 12 17:26:15.004147 kubelet[2892]: I1212 17:26:15.003567 2892 policy_none.go:47] "Start" Dec 12 17:26:15.007292 kubelet[2892]: E1212 17:26:15.007190 2892 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Dec 12 17:26:15.008068 kubelet[2892]: I1212 17:26:15.008043 2892 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 12 17:26:15.008125 kubelet[2892]: I1212 17:26:15.008067 2892 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 12 17:26:15.008385 kubelet[2892]: I1212 17:26:15.008364 2892 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 12 17:26:15.010305 kubelet[2892]: E1212 17:26:15.010257 2892 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 12 17:26:15.068856 kubelet[2892]: I1212 17:26:15.068812 2892 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.070230 kubelet[2892]: I1212 17:26:15.070189 2892 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.070435 kubelet[2892]: I1212 17:26:15.070322 2892 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.077694 kubelet[2892]: E1212 17:26:15.077640 2892 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-e-d121438740\" already exists" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.111743 kubelet[2892]: I1212 17:26:15.111700 2892 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:15.121734 kubelet[2892]: I1212 17:26:15.121700 2892 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:15.121841 kubelet[2892]: I1212 17:26:15.121789 2892 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156233 kubelet[2892]: I1212 17:26:15.156171 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1e59e071b4bcd3dbfa392dc661cd89f-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-e-d121438740\" (UID: \"a1e59e071b4bcd3dbfa392dc661cd89f\") " pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156233 kubelet[2892]: I1212 17:26:15.156211 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd95789eb9cf58ea82adec21e4106b-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-e-d121438740\" (UID: \"d7cd95789eb9cf58ea82adec21e4106b\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156233 kubelet[2892]: I1212 17:26:15.156228 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d7cd95789eb9cf58ea82adec21e4106b-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-e-d121438740\" (UID: \"d7cd95789eb9cf58ea82adec21e4106b\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156233 kubelet[2892]: I1212 17:26:15.156246 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156437 kubelet[2892]: I1212 17:26:15.156262 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156437 kubelet[2892]: I1212 17:26:15.156279 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156437 kubelet[2892]: I1212 17:26:15.156296 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156437 kubelet[2892]: I1212 17:26:15.156347 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d7cd95789eb9cf58ea82adec21e4106b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-e-d121438740\" (UID: \"d7cd95789eb9cf58ea82adec21e4106b\") " pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.156437 kubelet[2892]: I1212 17:26:15.156371 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/46ffb5544025851dfbe41cd9f386496d-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-e-d121438740\" (UID: \"46ffb5544025851dfbe41cd9f386496d\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" Dec 12 17:26:15.934715 kubelet[2892]: I1212 17:26:15.934631 2892 apiserver.go:52] "Watching apiserver" Dec 12 17:26:15.951425 kubelet[2892]: I1212 17:26:15.951360 2892 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 12 17:26:15.992215 kubelet[2892]: I1212 17:26:15.991496 2892 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:16.001066 kubelet[2892]: E1212 17:26:16.000901 2892 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-e-d121438740\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" Dec 12 17:26:16.014920 kubelet[2892]: I1212 17:26:16.014708 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-e-d121438740" podStartSLOduration=2.014694556 podStartE2EDuration="2.014694556s" podCreationTimestamp="2025-12-12 17:26:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:16.014668956 +0000 UTC m=+1.138722274" watchObservedRunningTime="2025-12-12 17:26:16.014694556 +0000 UTC m=+1.138747834" Dec 12 17:26:16.032352 kubelet[2892]: I1212 17:26:16.032283 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-e-d121438740" podStartSLOduration=1.032168005 podStartE2EDuration="1.032168005s" podCreationTimestamp="2025-12-12 17:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:16.032169285 +0000 UTC m=+1.156222603" watchObservedRunningTime="2025-12-12 17:26:16.032168005 +0000 UTC m=+1.156221283" Dec 12 17:26:16.032682 kubelet[2892]: I1212 17:26:16.032473 2892 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-e-d121438740" podStartSLOduration=1.032466406 podStartE2EDuration="1.032466406s" podCreationTimestamp="2025-12-12 17:26:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:16.024066683 +0000 UTC m=+1.148119961" watchObservedRunningTime="2025-12-12 17:26:16.032466406 +0000 UTC m=+1.156519724" Dec 12 17:26:16.382058 update_engine[1667]: I20251212 17:26:16.381557 1667 update_attempter.cc:509] Updating boot flags... Dec 12 17:26:20.576254 kubelet[2892]: I1212 17:26:20.576200 2892 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 12 17:26:20.576874 kubelet[2892]: I1212 17:26:20.576659 2892 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 12 17:26:20.576924 containerd[1692]: time="2025-12-12T17:26:20.576495504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 12 17:26:21.300822 systemd[1]: Created slice kubepods-besteffort-podff94cc6f_653e_4b15_9b10_44c8133d70d4.slice - libcontainer container kubepods-besteffort-podff94cc6f_653e_4b15_9b10_44c8133d70d4.slice. Dec 12 17:26:21.393523 kubelet[2892]: I1212 17:26:21.393443 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff94cc6f-653e-4b15-9b10-44c8133d70d4-xtables-lock\") pod \"kube-proxy-vgfn7\" (UID: \"ff94cc6f-653e-4b15-9b10-44c8133d70d4\") " pod="kube-system/kube-proxy-vgfn7" Dec 12 17:26:21.393523 kubelet[2892]: I1212 17:26:21.393488 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff94cc6f-653e-4b15-9b10-44c8133d70d4-lib-modules\") pod \"kube-proxy-vgfn7\" (UID: \"ff94cc6f-653e-4b15-9b10-44c8133d70d4\") " pod="kube-system/kube-proxy-vgfn7" Dec 12 17:26:21.393523 kubelet[2892]: I1212 17:26:21.393508 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547c6\" (UniqueName: \"kubernetes.io/projected/ff94cc6f-653e-4b15-9b10-44c8133d70d4-kube-api-access-547c6\") pod \"kube-proxy-vgfn7\" (UID: \"ff94cc6f-653e-4b15-9b10-44c8133d70d4\") " pod="kube-system/kube-proxy-vgfn7" Dec 12 17:26:21.393523 kubelet[2892]: I1212 17:26:21.393524 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff94cc6f-653e-4b15-9b10-44c8133d70d4-kube-proxy\") pod \"kube-proxy-vgfn7\" (UID: \"ff94cc6f-653e-4b15-9b10-44c8133d70d4\") " pod="kube-system/kube-proxy-vgfn7" Dec 12 17:26:21.613772 containerd[1692]: time="2025-12-12T17:26:21.613655175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgfn7,Uid:ff94cc6f-653e-4b15-9b10-44c8133d70d4,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:21.632638 containerd[1692]: time="2025-12-12T17:26:21.632593911Z" level=info msg="connecting to shim 18dd0742d3b8235e49bab7e50250d5fcd93538659c0a8faa9354b5c8fd55c18c" address="unix:///run/containerd/s/3577a8df3116134232912cc8e21217ed7eae2c223e4493c884701287f34bacaa" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:21.659323 systemd[1]: Started cri-containerd-18dd0742d3b8235e49bab7e50250d5fcd93538659c0a8faa9354b5c8fd55c18c.scope - libcontainer container 
18dd0742d3b8235e49bab7e50250d5fcd93538659c0a8faa9354b5c8fd55c18c. Dec 12 17:26:21.675000 audit: BPF prog-id=133 op=LOAD Dec 12 17:26:21.678062 kernel: kauditd_printk_skb: 32 callbacks suppressed Dec 12 17:26:21.678229 kernel: audit: type=1334 audit(1765560381.675:438): prog-id=133 op=LOAD Dec 12 17:26:21.676000 audit: BPF prog-id=134 op=LOAD Dec 12 17:26:21.676000 audit[2981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.682385 kernel: audit: type=1334 audit(1765560381.676:439): prog-id=134 op=LOAD Dec 12 17:26:21.682453 kernel: audit: type=1300 audit(1765560381.676:439): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.682476 kernel: audit: type=1327 audit(1765560381.676:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.676000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.677000 audit: BPF prog-id=134 op=UNLOAD Dec 12 17:26:21.686500 kernel: audit: type=1334 audit(1765560381.677:440): prog-id=134 op=UNLOAD Dec 12 17:26:21.677000 audit[2981]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.689708 kernel: audit: type=1300 audit(1765560381.677:440): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.689771 kernel: audit: type=1327 audit(1765560381.677:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.677000 audit: BPF prog-id=135 op=LOAD Dec 12 17:26:21.693836 kernel: audit: type=1334 audit(1765560381.677:441): prog-id=135 op=LOAD Dec 12 17:26:21.693890 kernel: audit: type=1300 audit(1765560381.677:441): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2970 pid=2981 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.677000 audit[2981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.677000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.700348 kernel: audit: type=1327 audit(1765560381.677:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.678000 audit: BPF prog-id=136 op=LOAD Dec 12 17:26:21.678000 audit[2981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.678000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.684000 audit: BPF prog-id=136 op=UNLOAD Dec 12 17:26:21.684000 audit[2981]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.684000 audit: BPF prog-id=135 op=UNLOAD Dec 12 17:26:21.684000 audit[2981]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.684000 audit: BPF prog-id=137 op=LOAD Dec 12 17:26:21.684000 audit[2981]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2970 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:26:21.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138646430373432643362383233356534396261623765353032353064 Dec 12 17:26:21.713493 containerd[1692]: time="2025-12-12T17:26:21.713450042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vgfn7,Uid:ff94cc6f-653e-4b15-9b10-44c8133d70d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"18dd0742d3b8235e49bab7e50250d5fcd93538659c0a8faa9354b5c8fd55c18c\"" Dec 12 17:26:21.719678 containerd[1692]: time="2025-12-12T17:26:21.719643793Z" level=info msg="CreateContainer within sandbox \"18dd0742d3b8235e49bab7e50250d5fcd93538659c0a8faa9354b5c8fd55c18c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 12 17:26:21.733910 containerd[1692]: time="2025-12-12T17:26:21.733628624Z" level=info msg="Container ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:21.737010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567454635.mount: Deactivated successfully. Dec 12 17:26:21.752869 containerd[1692]: time="2025-12-12T17:26:21.752801162Z" level=info msg="CreateContainer within sandbox \"18dd0742d3b8235e49bab7e50250d5fcd93538659c0a8faa9354b5c8fd55c18c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3\"" Dec 12 17:26:21.753458 containerd[1692]: time="2025-12-12T17:26:21.753431885Z" level=info msg="StartContainer for \"ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3\"" Dec 12 17:26:21.754855 containerd[1692]: time="2025-12-12T17:26:21.754830652Z" level=info msg="connecting to shim ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3" address="unix:///run/containerd/s/3577a8df3116134232912cc8e21217ed7eae2c223e4493c884701287f34bacaa" protocol=ttrpc version=3 Dec 12 17:26:21.775321 systemd[1]: Started cri-containerd-ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3.scope - libcontainer container ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3. Dec 12 17:26:21.786035 systemd[1]: Created slice kubepods-besteffort-podc898d5f5_e4e0_42f3_9833_780383a8e871.slice - libcontainer container kubepods-besteffort-podc898d5f5_e4e0_42f3_9833_780383a8e871.slice. 
Dec 12 17:26:21.796301 kubelet[2892]: I1212 17:26:21.796220 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c898d5f5-e4e0-42f3-9833-780383a8e871-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-2fr8f\" (UID: \"c898d5f5-e4e0-42f3-9833-780383a8e871\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2fr8f" Dec 12 17:26:21.796622 kubelet[2892]: I1212 17:26:21.796361 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdhcr\" (UniqueName: \"kubernetes.io/projected/c898d5f5-e4e0-42f3-9833-780383a8e871-kube-api-access-rdhcr\") pod \"tigera-operator-65cdcdfd6d-2fr8f\" (UID: \"c898d5f5-e4e0-42f3-9833-780383a8e871\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-2fr8f" Dec 12 17:26:21.838000 audit: BPF prog-id=138 op=LOAD Dec 12 17:26:21.838000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2970 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393561366631633262323266623836333537306134653239393833 Dec 12 17:26:21.838000 audit: BPF prog-id=139 op=LOAD Dec 12 17:26:21.838000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2970 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393561366631633262323266623836333537306134653239393833 Dec 12 17:26:21.839000 audit: BPF prog-id=139 op=UNLOAD Dec 12 17:26:21.839000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393561366631633262323266623836333537306134653239393833 Dec 12 17:26:21.839000 audit: BPF prog-id=138 op=UNLOAD Dec 12 17:26:21.839000 audit[3008]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2970 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.839000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393561366631633262323266623836333537306134653239393833 Dec 12 17:26:21.839000 audit: BPF prog-id=140 op=LOAD Dec 12 17:26:21.839000 audit[3008]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2970 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:21.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6566393561366631633262323266623836333537306134653239393833 Dec 12 17:26:21.857606 containerd[1692]: time="2025-12-12T17:26:21.857568654Z" level=info msg="StartContainer for \"ef95a6f1c2b22fb863570a4e29983bdadf8f46c33fbf61a7fd121a68991c0ae3\" returns successfully" Dec 12 17:26:22.014510 kubelet[2892]: I1212 17:26:22.014455 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vgfn7" podStartSLOduration=1.014440011 podStartE2EDuration="1.014440011s" podCreationTimestamp="2025-12-12 17:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:26:22.013560847 +0000 UTC m=+7.137614125" watchObservedRunningTime="2025-12-12 17:26:22.014440011 +0000 UTC m=+7.138493289" Dec 12 17:26:22.089000 audit[3075]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3075 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.089000 audit[3075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd7bfe8e0 a2=0 a3=1 items=0 ppid=3021 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.089000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:26:22.090000 audit[3078]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.090000 audit[3078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb7f5e50 a2=0 a3=1 items=0 ppid=3021 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:26:22.090000 audit[3076]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3076 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.090000 audit[3076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4f364f0 a2=0 a3=1 items=0 ppid=3021 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.090000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 12 17:26:22.094812 containerd[1692]: time="2025-12-12T17:26:22.094773300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2fr8f,Uid:c898d5f5-e4e0-42f3-9833-780383a8e871,Namespace:tigera-operator,Attempt:0,}" Dec 12 17:26:22.094000 audit[3080]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.094000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3c83460 a2=0 a3=1 items=0 ppid=3021 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:26:22.095000 audit[3082]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3082 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.095000 audit[3082]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdccd4800 a2=0 a3=1 items=0 ppid=3021 pid=3082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 12 17:26:22.096000 audit[3083]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.096000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd9f780e0 a2=0 a3=1 items=0 ppid=3021 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.096000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 12 17:26:22.118945 containerd[1692]: time="2025-12-12T17:26:22.118896262Z" level=info msg="connecting to shim aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542" address="unix:///run/containerd/s/3872b6132fd966fe1dd98ee5081e3f8313aee7738cae95bc137a79b84d0370d4" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:22.149459 systemd[1]: Started cri-containerd-aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542.scope - libcontainer container aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542. 
Dec 12 17:26:22.157000 audit: BPF prog-id=141 op=LOAD Dec 12 17:26:22.158000 audit: BPF prog-id=142 op=LOAD Dec 12 17:26:22.158000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.158000 audit: BPF prog-id=142 op=UNLOAD Dec 12 17:26:22.158000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.158000 audit: BPF prog-id=143 op=LOAD Dec 12 17:26:22.158000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.159000 audit: BPF prog-id=144 op=LOAD Dec 12 17:26:22.159000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.159000 audit: BPF prog-id=144 op=UNLOAD Dec 12 17:26:22.159000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.159000 audit: BPF prog-id=143 op=UNLOAD Dec 12 17:26:22.159000 audit[3104]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.159000 audit: BPF prog-id=145 op=LOAD Dec 12 17:26:22.159000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3092 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.159000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161383838643937313962613935633637663437616236303938353232 Dec 12 17:26:22.179748 containerd[1692]: time="2025-12-12T17:26:22.179713171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-2fr8f,Uid:c898d5f5-e4e0-42f3-9833-780383a8e871,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542\"" Dec 12 17:26:22.181029 containerd[1692]: time="2025-12-12T17:26:22.181005098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Dec 12 17:26:22.191000 audit[3129]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.191000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc671b380 a2=0 a3=1 items=0 ppid=3021 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.191000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:26:22.194000 audit[3131]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.194000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd9936160 a2=0 a3=1 items=0 ppid=3021 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.194000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73002D Dec 12 17:26:22.197000 audit[3134]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.197000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc53dfa10 a2=0 a3=1 items=0 ppid=3021 pid=3134 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 17:26:22.198000 audit[3135]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.198000 audit[3135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe381f40 a2=0 a3=1 items=0 ppid=3021 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:26:22.200000 audit[3137]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.200000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcff25e50 a2=0 a3=1 items=0 ppid=3021 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.200000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:26:22.201000 audit[3138]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.201000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8fa9af0 a2=0 a3=1 items=0 ppid=3021 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.201000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:26:22.203000 audit[3140]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.203000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcfecebd0 a2=0 a3=1 items=0 ppid=3021 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.207000 audit[3143]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.207000 
audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc5436b10 a2=0 a3=1 items=0 ppid=3021 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.208000 audit[3144]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.208000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd04b5450 a2=0 a3=1 items=0 ppid=3021 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:26:22.210000 audit[3146]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.210000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc3d50c50 a2=0 a3=1 items=0 ppid=3021 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:26:22.211000 audit[3147]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.211000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff27a24c0 a2=0 a3=1 items=0 ppid=3021 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:26:22.213000 audit[3149]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.213000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc859ac30 a2=0 a3=1 items=0 ppid=3021 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.213000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F5859 Dec 12 17:26:22.216000 audit[3152]: NETFILTER_CFG table=filter:72 
family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.216000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffe248e60 a2=0 a3=1 items=0 ppid=3021 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 17:26:22.219000 audit[3155]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.219000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4e64ec0 a2=0 a3=1 items=0 ppid=3021 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 17:26:22.220000 audit[3156]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.220000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc26e3860 a2=0 a3=1 items=0 ppid=3021 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:26:22.223000 audit[3158]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.223000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc86c8ba0 a2=0 a3=1 items=0 ppid=3021 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.226000 audit[3161]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.226000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffdde6fe0 a2=0 a3=1 items=0 ppid=3021 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.226000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.227000 audit[3162]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.227000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff08bbd40 a2=0 a3=1 items=0 ppid=3021 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.227000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:26:22.229000 audit[3164]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 12 17:26:22.229000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe22bf510 a2=0 a3=1 items=0 ppid=3021 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:26:22.252000 audit[3170]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:22.252000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffefe0b9e0 a2=0 a3=1 items=0 ppid=3021 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:22.261000 audit[3170]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:22.261000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffefe0b9e0 a2=0 a3=1 items=0 ppid=3021 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:22.264000 audit[3175]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.264000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdf875f00 a2=0 a3=1 items=0 ppid=3021 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.264000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 12 17:26:22.266000 audit[3177]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.266000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdc60d6c0 a2=0 a3=1 items=0 ppid=3021 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.266000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C73 Dec 12 17:26:22.270000 audit[3180]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.270000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd6e51c60 a2=0 a3=1 items=0 ppid=3021 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.270000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669636520706F7274616C Dec 12 17:26:22.271000 audit[3181]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.271000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc5a0f40 a2=0 a3=1 items=0 ppid=3021 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 12 17:26:22.273000 audit[3183]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.273000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe7cacf30 a2=0 a3=1 items=0 ppid=3021 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.273000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 12 17:26:22.274000 audit[3184]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.274000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1a53870 a2=0 a3=1 items=0 ppid=3021 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D740066696C746572 Dec 12 17:26:22.276000 audit[3186]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.276000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff75dc260 a2=0 a3=1 items=0 ppid=3021 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.276000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.279000 audit[3189]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.279000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffda4c88c0 a2=0 a3=1 items=0 ppid=3021 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.280000 audit[3190]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.280000 audit[3190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0be9fb0 a2=0 a3=1 items=0 ppid=3021 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.280000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D464F5257415244002D740066696C746572 Dec 12 17:26:22.282000 audit[3192]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.282000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff61d16e0 a2=0 a3=1 items=0 ppid=3021 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 12 17:26:22.283000 audit[3193]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.283000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=104 a0=3 a1=ffffc25d1d20 a2=0 a3=1 items=0 ppid=3021 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.283000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 12 17:26:22.286000 audit[3195]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.286000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe1debc10 a2=0 a3=1 items=0 ppid=3021 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F58 Dec 12 17:26:22.289000 audit[3198]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.289000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd7595710 a2=0 a3=1 items=0 ppid=3021 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D50524F Dec 12 17:26:22.293000 audit[3201]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.293000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe10af7a0 a2=0 a3=1 items=0 ppid=3021 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A004B5542452D5052 Dec 12 17:26:22.294000 audit[3202]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.294000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe23366a0 a2=0 a3=1 items=0 ppid=3021 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.294000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D5345525649434553002D74006E6174 Dec 12 17:26:22.296000 audit[3204]: 
NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.296000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe75754a0 a2=0 a3=1 items=0 ppid=3021 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.296000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.299000 audit[3207]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.299000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff24d6570 a2=0 a3=1 items=0 ppid=3021 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 12 17:26:22.300000 audit[3208]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.300000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffead31e20 a2=0 a3=1 items=0 ppid=3021 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.300000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 12 17:26:22.302000 audit[3210]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.302000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffea390480 a2=0 a3=1 items=0 ppid=3021 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 12 17:26:22.303000 audit[3211]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.303000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde562590 a2=0 a3=1 items=0 ppid=3021 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 12 
17:26:22.305000 audit[3213]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.305000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd38774c0 a2=0 a3=1 items=0 ppid=3021 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.305000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:22.308000 audit[3216]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 12 17:26:22.308000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc7b36c20 a2=0 a3=1 items=0 ppid=3021 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 12 17:26:22.311000 audit[3218]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:26:22.311000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe9182420 a2=0 a3=1 items=0 ppid=3021 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.311000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:22.311000 audit[3218]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 12 17:26:22.311000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe9182420 a2=0 a3=1 items=0 ppid=3021 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:22.311000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:24.921169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount563827259.mount: Deactivated successfully. 
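The PROCTITLE fields in the audit records above are hex-encoded argv strings separated by NUL bytes, so the exact ip6tables commands kube-proxy issued can be recovered straight from the log. A minimal Python sketch (not part of the log; the hex value is copied verbatim from the first PROCTITLE record in this block):

```python
# Decode an audit PROCTITLE field: hex-encoded argv joined by NUL bytes.
# The value below is copied from the first PROCTITLE record in this block.
hex_proctitle = (
    "6970367461626C6573002D770035002D4E004B5542452D45585445524E414C"
    "2D5345525649434553002D740066696C746572"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# ip6tables -w 5 -N KUBE-EXTERNAL-SERVICES -t filter
```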
Dec 12 17:26:25.740058 containerd[1692]: time="2025-12-12T17:26:25.740000263Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:25.741336 containerd[1692]: time="2025-12-12T17:26:25.741287750Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22143261" Dec 12 17:26:25.742718 containerd[1692]: time="2025-12-12T17:26:25.742648517Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:25.744955 containerd[1692]: time="2025-12-12T17:26:25.744914848Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:25.745621 containerd[1692]: time="2025-12-12T17:26:25.745587252Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.564554874s" Dec 12 17:26:25.745660 containerd[1692]: time="2025-12-12T17:26:25.745620532Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Dec 12 17:26:25.750648 containerd[1692]: time="2025-12-12T17:26:25.750620557Z" level=info msg="CreateContainer within sandbox \"aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 12 17:26:25.758299 containerd[1692]: time="2025-12-12T17:26:25.758254556Z" level=info msg="Container b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:25.761155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884310572.mount: Deactivated successfully. Dec 12 17:26:25.765341 containerd[1692]: time="2025-12-12T17:26:25.765294192Z" level=info msg="CreateContainer within sandbox \"aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\"" Dec 12 17:26:25.765909 containerd[1692]: time="2025-12-12T17:26:25.765882955Z" level=info msg="StartContainer for \"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\"" Dec 12 17:26:25.768257 containerd[1692]: time="2025-12-12T17:26:25.768184367Z" level=info msg="connecting to shim b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8" address="unix:///run/containerd/s/3872b6132fd966fe1dd98ee5081e3f8313aee7738cae95bc137a79b84d0370d4" protocol=ttrpc version=3 Dec 12 17:26:25.786512 systemd[1]: Started cri-containerd-b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8.scope - libcontainer container b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8. 
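The tigera-operator pull above reports both the bytes transferred and the elapsed time, so the effective transfer rate can be checked with one line of arithmetic. A small sketch, assuming the logged "bytes read" figure covers the same window as the reported 3.56 s pull:

```python
# Effective pull rate for quay.io/tigera/operator:v1.38.7, from the figures above.
bytes_read = 22_143_261        # "bytes read=22143261"
duration_s = 3.564554874       # "in 3.564554874s"
print(f"{bytes_read / duration_s / 1e6:.2f} MB/s")  # roughly 6.21 MB/s
```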
Dec 12 17:26:25.795000 audit: BPF prog-id=146 op=LOAD Dec 12 17:26:25.796000 audit: BPF prog-id=147 op=LOAD Dec 12 17:26:25.796000 audit[3230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.796000 audit: BPF prog-id=147 op=UNLOAD Dec 12 17:26:25.796000 audit[3230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.796000 audit: BPF prog-id=148 op=LOAD Dec 12 17:26:25.796000 audit[3230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.796000 audit: BPF prog-id=149 op=LOAD Dec 12 17:26:25.796000 audit[3230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.796000 audit: BPF prog-id=149 op=UNLOAD Dec 12 17:26:25.796000 audit[3230]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.796000 audit: BPF prog-id=148 op=UNLOAD Dec 12 17:26:25.796000 audit[3230]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.796000 audit: BPF prog-id=150 op=LOAD Dec 12 17:26:25.796000 audit[3230]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3092 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:25.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6230353264346532346533313231366465366538613636616564323430 Dec 12 17:26:25.812334 containerd[1692]: time="2025-12-12T17:26:25.812298431Z" level=info msg="StartContainer for \"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\" returns successfully" Dec 12 17:26:26.023901 kubelet[2892]: I1212 17:26:26.023740 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-2fr8f" podStartSLOduration=1.457834184 podStartE2EDuration="5.023725945s" podCreationTimestamp="2025-12-12 17:26:21 +0000 UTC" firstStartedPulling="2025-12-12 17:26:22.180720736 +0000 UTC m=+7.304774014" lastFinishedPulling="2025-12-12 17:26:25.746612497 +0000 UTC m=+10.870665775" observedRunningTime="2025-12-12 17:26:26.023361623 +0000 UTC m=+11.147414901" watchObservedRunningTime="2025-12-12 17:26:26.023725945 +0000 UTC m=+11.147779183" Dec 12 17:26:27.807290 systemd[1]: cri-containerd-b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8.scope: Deactivated successfully. Dec 12 17:26:27.811761 containerd[1692]: time="2025-12-12T17:26:27.811634390Z" level=info msg="received container exit event container_id:\"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\" id:\"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\" pid:3242 exit_status:1 exited_at:{seconds:1765560387 nanos:811206028}" Dec 12 17:26:27.813000 audit: BPF prog-id=146 op=UNLOAD Dec 12 17:26:27.817139 kernel: kauditd_printk_skb: 224 callbacks suppressed Dec 12 17:26:27.817209 kernel: audit: type=1334 audit(1765560387.813:518): prog-id=146 op=UNLOAD Dec 12 17:26:27.817224 kernel: audit: type=1334 audit(1765560387.813:519): prog-id=150 op=UNLOAD Dec 12 17:26:27.813000 audit: BPF prog-id=150 op=UNLOAD Dec 12 17:26:27.831921 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8-rootfs.mount: Deactivated successfully. 
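The pod_startup_latency_tracker entry above is internally consistent: the figures match podStartSLOduration being the end-to-end startup duration minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check of that reading, using the seconds within the 17:26 minute since both pull timestamps fall in it:

```python
# Consistency check of the startup figures for tigera-operator-65cdcdfd6d-2fr8f.
e2e  = 5.023725945                      # podStartE2EDuration
pull = 25.746612497 - 22.180720736      # lastFinishedPulling - firstStartedPulling (same minute)
print(f"{e2e - pull:.9f}")              # 1.457834184, the logged podStartSLOduration
```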
Dec 12 17:26:28.019366 kubelet[2892]: I1212 17:26:28.019061 2892 scope.go:117] "RemoveContainer" containerID="b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8" Dec 12 17:26:28.021681 containerd[1692]: time="2025-12-12T17:26:28.021641298Z" level=info msg="CreateContainer within sandbox \"aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 12 17:26:28.036677 containerd[1692]: time="2025-12-12T17:26:28.035994010Z" level=info msg="Container 570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:28.045908 containerd[1692]: time="2025-12-12T17:26:28.045841380Z" level=info msg="CreateContainer within sandbox \"aa888d9719ba95c67f47ab6098522b33c8b908c23660c503c44d71e4bba9a542\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a\"" Dec 12 17:26:28.050144 containerd[1692]: time="2025-12-12T17:26:28.048720595Z" level=info msg="StartContainer for \"570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a\"" Dec 12 17:26:28.050144 containerd[1692]: time="2025-12-12T17:26:28.049872361Z" level=info msg="connecting to shim 570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a" address="unix:///run/containerd/s/3872b6132fd966fe1dd98ee5081e3f8313aee7738cae95bc137a79b84d0370d4" protocol=ttrpc version=3 Dec 12 17:26:28.083431 systemd[1]: Started cri-containerd-570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a.scope - libcontainer container 570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a. Dec 12 17:26:28.095000 audit: BPF prog-id=151 op=LOAD Dec 12 17:26:28.098126 kernel: audit: type=1334 audit(1765560388.095:520): prog-id=151 op=LOAD Dec 12 17:26:28.099185 kernel: audit: type=1334 audit(1765560388.096:521): prog-id=152 op=LOAD Dec 12 17:26:28.099235 kernel: audit: type=1300 audit(1765560388.096:521): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.096000 audit: BPF prog-id=152 op=LOAD Dec 12 17:26:28.096000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.096000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.106094 kernel: audit: type=1327 audit(1765560388.096:521): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.106250 kernel: audit: type=1334 audit(1765560388.097:522): prog-id=152 op=UNLOAD Dec 12 17:26:28.097000 audit: BPF prog-id=152 op=UNLOAD Dec 12 17:26:28.097000 audit[3283]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.109927 kernel: audit: type=1300 audit(1765560388.097:522): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.111151 kernel: audit: type=1327 audit(1765560388.097:522): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.097000 audit: BPF prog-id=153 op=LOAD Dec 12 17:26:28.118396 kernel: audit: type=1334 audit(1765560388.097:523): prog-id=153 op=LOAD Dec 12 17:26:28.097000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.097000 audit: BPF prog-id=154 op=LOAD Dec 12 17:26:28.097000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.097000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.101000 audit: BPF prog-id=154 op=UNLOAD Dec 12 17:26:28.101000 audit[3283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.101000 audit: BPF prog-id=153 op=UNLOAD Dec 12 17:26:28.101000 audit[3283]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.101000 audit: BPF prog-id=155 op=LOAD Dec 12 17:26:28.101000 audit[3283]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3092 pid=3283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:28.101000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3537306136396439373434393234383936386431366237376235373438 Dec 12 17:26:28.137066 containerd[1692]: time="2025-12-12T17:26:28.137022084Z" level=info msg="StartContainer for \"570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a\" returns successfully" Dec 12 17:26:31.163420 sudo[1937]: pam_unix(sudo:session): session closed for user root Dec 12 17:26:31.162000 audit[1937]: USER_END pid=1937 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:31.162000 audit[1937]: CRED_DISP pid=1937 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 12 17:26:31.320187 sshd[1936]: Connection closed by 139.178.89.65 port 33454 Dec 12 17:26:31.320698 sshd-session[1933]: pam_unix(sshd:session): session closed for user core Dec 12 17:26:31.320000 audit[1933]: USER_END pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:26:31.321000 audit[1933]: CRED_DISP pid=1933 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:26:31.325107 systemd[1]: sshd@6-10.0.11.71:22-139.178.89.65:33454.service: Deactivated successfully. Dec 12 17:26:31.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.11.71:22-139.178.89.65:33454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:26:31.329623 systemd[1]: session-7.scope: Deactivated successfully. Dec 12 17:26:31.331186 systemd[1]: session-7.scope: Consumed 7.155s CPU time, 225.6M memory peak. Dec 12 17:26:31.333206 systemd-logind[1666]: Session 7 logged out. Waiting for processes to exit. Dec 12 17:26:31.334022 systemd-logind[1666]: Removed session 7. 
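Most audit SYSCALL records in this section repeat the same few syscall numbers. Assuming the arm64 asm-generic syscall table (arch=c00000b7 is AUDIT_ARCH_AARCH64), 280 is bpf (the runc prog-id LOAD/UNLOAD records), 211 is sendmsg (the netlink call behind each NETFILTER_CFG record), and 57 is close. A small lookup sketch under that assumption:

```python
# Map the syscall numbers that recur in the audit records above (arm64 asm-generic table).
AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

def describe(arch_hex: str, nr: int) -> str:
    arch = "AUDIT_ARCH_AARCH64" if arch_hex.lower() == "c00000b7" else arch_hex
    return f"{arch} syscall {nr}: {AARCH64_SYSCALLS.get(nr, 'unknown')}"

print(describe("c00000b7", 280))  # bpf     -> the runc BPF prog-id LOAD/UNLOAD records
print(describe("c00000b7", 211))  # sendmsg -> the netlink call behind the NETFILTER_CFG records
```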
Dec 12 17:26:32.714000 audit[3367]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:32.714000 audit[3367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd4ec7310 a2=0 a3=1 items=0 ppid=3021 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:32.714000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:32.720000 audit[3367]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:32.720000 audit[3367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd4ec7310 a2=0 a3=1 items=0 ppid=3021 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:32.720000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.736000 audit[3369]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.740523 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 12 17:26:33.740679 kernel: audit: type=1325 audit(1765560393.736:535): table=filter:107 family=2 entries=16 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.741158 kernel: audit: type=1300 audit(1765560393.736:535): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb28ce60 a2=0 a3=1 items=0 ppid=3021 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.736000 audit[3369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb28ce60 a2=0 a3=1 items=0 ppid=3021 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.736000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.746351 kernel: audit: type=1327 audit(1765560393.736:535): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.750000 audit[3369]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.750000 audit[3369]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb28ce60 a2=0 a3=1 items=0 ppid=3021 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.757487 kernel: audit: type=1325 audit(1765560393.750:536): table=nat:108 family=2 entries=12 op=nft_register_rule pid=3369 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:33.757558 
kernel: audit: type=1300 audit(1765560393.750:536): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffb28ce60 a2=0 a3=1 items=0 ppid=3021 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:33.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:33.759830 kernel: audit: type=1327 audit(1765560393.750:536): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:36.514000 audit[3373]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:36.514000 audit[3373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffdef9930 a2=0 a3=1 items=0 ppid=3021 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:36.523935 kernel: audit: type=1325 audit(1765560396.514:537): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:36.524031 kernel: audit: type=1300 audit(1765560396.514:537): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffffdef9930 a2=0 a3=1 items=0 ppid=3021 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:36.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:36.527854 kernel: audit: type=1327 audit(1765560396.514:537): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:36.528269 kernel: audit: type=1325 audit(1765560396.520:538): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:36.520000 audit[3373]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:36.520000 audit[3373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffdef9930 a2=0 a3=1 items=0 ppid=3021 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:36.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:37.537000 audit[3375]: NETFILTER_CFG table=filter:111 family=2 entries=19 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:37.537000 audit[3375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcbd01cd0 a2=0 a3=1 items=0 ppid=3021 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.537000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:37.544000 audit[3375]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3375 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:37.544000 audit[3375]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcbd01cd0 a2=0 a3=1 items=0 ppid=3021 pid=3375 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:37.544000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:39.374000 audit[3377]: NETFILTER_CFG table=filter:113 family=2 entries=21 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:39.376376 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 12 17:26:39.376444 kernel: audit: type=1325 audit(1765560399.374:541): table=filter:113 family=2 entries=21 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:39.374000 audit[3377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc5c57560 a2=0 a3=1 items=0 ppid=3021 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.382268 kernel: audit: type=1300 audit(1765560399.374:541): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc5c57560 a2=0 a3=1 items=0 ppid=3021 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.382916 kernel: audit: type=1327 audit(1765560399.374:541): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:39.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:39.382000 audit[3377]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:39.387165 kernel: audit: type=1325 audit(1765560399.382:542): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3377 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:39.382000 audit[3377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5c57560 a2=0 a3=1 items=0 ppid=3021 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.391176 kernel: audit: type=1300 audit(1765560399.382:542): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5c57560 a2=0 a3=1 items=0 ppid=3021 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.391802 kernel: audit: type=1327 audit(1765560399.382:542): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:39.382000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:39.414286 systemd[1]: Created slice kubepods-besteffort-pod53cb8e2e_5bdf_4103_adb3_d1ad2e1f5f51.slice - libcontainer container kubepods-besteffort-pod53cb8e2e_5bdf_4103_adb3_d1ad2e1f5f51.slice. Dec 12 17:26:39.513851 kubelet[2892]: I1212 17:26:39.513799 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51-tigera-ca-bundle\") pod \"calico-typha-7c6c7cd5f7-x8vgd\" (UID: \"53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51\") " pod="calico-system/calico-typha-7c6c7cd5f7-x8vgd" Dec 12 17:26:39.513851 kubelet[2892]: I1212 17:26:39.513851 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51-typha-certs\") pod \"calico-typha-7c6c7cd5f7-x8vgd\" (UID: \"53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51\") " pod="calico-system/calico-typha-7c6c7cd5f7-x8vgd" Dec 12 17:26:39.514386 kubelet[2892]: I1212 17:26:39.513868 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf4k8\" (UniqueName: \"kubernetes.io/projected/53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51-kube-api-access-tf4k8\") pod \"calico-typha-7c6c7cd5f7-x8vgd\" (UID: \"53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51\") " pod="calico-system/calico-typha-7c6c7cd5f7-x8vgd" Dec 12 17:26:39.603805 systemd[1]: Created slice kubepods-besteffort-poda254442a_7427_49a8_b367_a270b7a232c0.slice - libcontainer container kubepods-besteffort-poda254442a_7427_49a8_b367_a270b7a232c0.slice. Dec 12 17:26:39.715394 kubelet[2892]: I1212 17:26:39.714940 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-flexvol-driver-host\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715394 kubelet[2892]: I1212 17:26:39.714987 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-policysync\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715394 kubelet[2892]: I1212 17:26:39.715005 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-var-run-calico\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715394 kubelet[2892]: I1212 17:26:39.715023 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-cni-bin-dir\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715394 kubelet[2892]: I1212 17:26:39.715090 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a254442a-7427-49a8-b367-a270b7a232c0-tigera-ca-bundle\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715634 kubelet[2892]: I1212 17:26:39.715174 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-var-lib-calico\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715634 kubelet[2892]: I1212 17:26:39.715219 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-cni-log-dir\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715634 kubelet[2892]: I1212 17:26:39.715245 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-cni-net-dir\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715634 kubelet[2892]: I1212 17:26:39.715274 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-lib-modules\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715634 kubelet[2892]: I1212 17:26:39.715312 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a254442a-7427-49a8-b367-a270b7a232c0-xtables-lock\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715761 kubelet[2892]: I1212 17:26:39.715377 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq5xl\" (UniqueName: \"kubernetes.io/projected/a254442a-7427-49a8-b367-a270b7a232c0-kube-api-access-nq5xl\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.715761 kubelet[2892]: I1212 17:26:39.715553 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a254442a-7427-49a8-b367-a270b7a232c0-node-certs\") pod \"calico-node-frftk\" (UID: \"a254442a-7427-49a8-b367-a270b7a232c0\") " pod="calico-system/calico-node-frftk" Dec 12 17:26:39.723455 containerd[1692]: time="2025-12-12T17:26:39.723274558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c6c7cd5f7-x8vgd,Uid:53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:39.745653 containerd[1692]: time="2025-12-12T17:26:39.745608111Z" level=info msg="connecting to shim da17f9ee0e678c94297a645bb0b78c5df6355efc08472293cf81c70028821cc3" address="unix:///run/containerd/s/e30e0e7f86274466f39485ae77338d6955c1b388a91d17786f9af38a2e524cdc" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:39.778464 systemd[1]: Started 
cri-containerd-da17f9ee0e678c94297a645bb0b78c5df6355efc08472293cf81c70028821cc3.scope - libcontainer container da17f9ee0e678c94297a645bb0b78c5df6355efc08472293cf81c70028821cc3. Dec 12 17:26:39.795043 kubelet[2892]: E1212 17:26:39.794993 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:26:39.794000 audit: BPF prog-id=156 op=LOAD Dec 12 17:26:39.797131 kernel: audit: type=1334 audit(1765560399.794:543): prog-id=156 op=LOAD Dec 12 17:26:39.796000 audit: BPF prog-id=157 op=LOAD Dec 12 17:26:39.796000 audit[3399]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.802966 kernel: audit: type=1334 audit(1765560399.796:544): prog-id=157 op=LOAD Dec 12 17:26:39.803039 kernel: audit: type=1300 audit(1765560399.796:544): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.803056 kernel: audit: type=1327 audit(1765560399.796:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.796000 audit: BPF prog-id=157 op=UNLOAD Dec 12 17:26:39.796000 audit[3399]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.796000 audit: BPF prog-id=158 op=LOAD Dec 12 17:26:39.796000 audit[3399]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.796000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 
17:26:39.801000 audit: BPF prog-id=159 op=LOAD Dec 12 17:26:39.801000 audit[3399]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.801000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.804000 audit: BPF prog-id=159 op=UNLOAD Dec 12 17:26:39.804000 audit[3399]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.804000 audit: BPF prog-id=158 op=UNLOAD Dec 12 17:26:39.804000 audit[3399]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.804000 audit: BPF prog-id=160 op=LOAD Dec 12 17:26:39.804000 audit[3399]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3388 pid=3399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461313766396565306536373863393432393761363435626230623738 Dec 12 17:26:39.817787 kubelet[2892]: E1212 17:26:39.817751 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.817787 kubelet[2892]: W1212 17:26:39.817776 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.817787 kubelet[2892]: E1212 17:26:39.817796 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.824285 kubelet[2892]: E1212 17:26:39.824174 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.824285 kubelet[2892]: W1212 17:26:39.824196 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.824285 kubelet[2892]: E1212 17:26:39.824216 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.827732 kubelet[2892]: E1212 17:26:39.827708 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.827732 kubelet[2892]: W1212 17:26:39.827730 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.827866 kubelet[2892]: E1212 17:26:39.827748 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.836941 containerd[1692]: time="2025-12-12T17:26:39.836879495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c6c7cd5f7-x8vgd,Uid:53cb8e2e-5bdf-4103-adb3-d1ad2e1f5f51,Namespace:calico-system,Attempt:0,} returns sandbox id \"da17f9ee0e678c94297a645bb0b78c5df6355efc08472293cf81c70028821cc3\"" Dec 12 17:26:39.838558 containerd[1692]: time="2025-12-12T17:26:39.838519343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Dec 12 17:26:39.895038 kubelet[2892]: E1212 17:26:39.895009 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895038 kubelet[2892]: W1212 17:26:39.895031 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895038 kubelet[2892]: E1212 17:26:39.895048 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.895231 kubelet[2892]: E1212 17:26:39.895189 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895231 kubelet[2892]: W1212 17:26:39.895196 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895231 kubelet[2892]: E1212 17:26:39.895231 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.895372 kubelet[2892]: E1212 17:26:39.895359 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895372 kubelet[2892]: W1212 17:26:39.895368 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895411 kubelet[2892]: E1212 17:26:39.895376 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.895496 kubelet[2892]: E1212 17:26:39.895487 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895496 kubelet[2892]: W1212 17:26:39.895496 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895534 kubelet[2892]: E1212 17:26:39.895503 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.895642 kubelet[2892]: E1212 17:26:39.895632 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895642 kubelet[2892]: W1212 17:26:39.895641 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895684 kubelet[2892]: E1212 17:26:39.895649 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.895783 kubelet[2892]: E1212 17:26:39.895773 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895807 kubelet[2892]: W1212 17:26:39.895783 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895807 kubelet[2892]: E1212 17:26:39.895791 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.895935 kubelet[2892]: E1212 17:26:39.895925 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.895960 kubelet[2892]: W1212 17:26:39.895935 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.895960 kubelet[2892]: E1212 17:26:39.895943 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.896077 kubelet[2892]: E1212 17:26:39.896067 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.896103 kubelet[2892]: W1212 17:26:39.896076 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.896103 kubelet[2892]: E1212 17:26:39.896084 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.896276 kubelet[2892]: E1212 17:26:39.896265 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.896299 kubelet[2892]: W1212 17:26:39.896276 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.896299 kubelet[2892]: E1212 17:26:39.896285 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.896430 kubelet[2892]: E1212 17:26:39.896420 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.896455 kubelet[2892]: W1212 17:26:39.896431 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.896455 kubelet[2892]: E1212 17:26:39.896439 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.896597 kubelet[2892]: E1212 17:26:39.896587 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.896618 kubelet[2892]: W1212 17:26:39.896599 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.896618 kubelet[2892]: E1212 17:26:39.896606 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.896724 kubelet[2892]: E1212 17:26:39.896715 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.896746 kubelet[2892]: W1212 17:26:39.896724 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.896746 kubelet[2892]: E1212 17:26:39.896731 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.896856 kubelet[2892]: E1212 17:26:39.896848 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.896881 kubelet[2892]: W1212 17:26:39.896856 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.896881 kubelet[2892]: E1212 17:26:39.896863 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.897000 kubelet[2892]: E1212 17:26:39.896989 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897000 kubelet[2892]: W1212 17:26:39.896999 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897040 kubelet[2892]: E1212 17:26:39.897006 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.897133 kubelet[2892]: E1212 17:26:39.897122 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897133 kubelet[2892]: W1212 17:26:39.897131 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897175 kubelet[2892]: E1212 17:26:39.897139 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.897265 kubelet[2892]: E1212 17:26:39.897255 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897265 kubelet[2892]: W1212 17:26:39.897264 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897303 kubelet[2892]: E1212 17:26:39.897272 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.897407 kubelet[2892]: E1212 17:26:39.897398 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897432 kubelet[2892]: W1212 17:26:39.897408 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897432 kubelet[2892]: E1212 17:26:39.897415 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.897531 kubelet[2892]: E1212 17:26:39.897522 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897552 kubelet[2892]: W1212 17:26:39.897531 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897552 kubelet[2892]: E1212 17:26:39.897538 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.897652 kubelet[2892]: E1212 17:26:39.897643 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897674 kubelet[2892]: W1212 17:26:39.897652 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897674 kubelet[2892]: E1212 17:26:39.897658 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.897777 kubelet[2892]: E1212 17:26:39.897768 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.897798 kubelet[2892]: W1212 17:26:39.897776 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.897798 kubelet[2892]: E1212 17:26:39.897783 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.910799 containerd[1692]: time="2025-12-12T17:26:39.910759910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-frftk,Uid:a254442a-7427-49a8-b367-a270b7a232c0,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:39.917253 kubelet[2892]: E1212 17:26:39.917230 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.917253 kubelet[2892]: W1212 17:26:39.917253 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.917387 kubelet[2892]: E1212 17:26:39.917270 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.917387 kubelet[2892]: I1212 17:26:39.917295 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/131dba81-ae70-4090-a6eb-8ebf5f86d388-socket-dir\") pod \"csi-node-driver-5cw4v\" (UID: \"131dba81-ae70-4090-a6eb-8ebf5f86d388\") " pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:39.917493 kubelet[2892]: E1212 17:26:39.917428 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.917493 kubelet[2892]: W1212 17:26:39.917436 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.917493 kubelet[2892]: E1212 17:26:39.917444 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.917493 kubelet[2892]: I1212 17:26:39.917457 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlj4w\" (UniqueName: \"kubernetes.io/projected/131dba81-ae70-4090-a6eb-8ebf5f86d388-kube-api-access-nlj4w\") pod \"csi-node-driver-5cw4v\" (UID: \"131dba81-ae70-4090-a6eb-8ebf5f86d388\") " pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:39.917657 kubelet[2892]: E1212 17:26:39.917603 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.917657 kubelet[2892]: W1212 17:26:39.917611 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.917657 kubelet[2892]: E1212 17:26:39.917619 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.917657 kubelet[2892]: I1212 17:26:39.917641 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/131dba81-ae70-4090-a6eb-8ebf5f86d388-kubelet-dir\") pod \"csi-node-driver-5cw4v\" (UID: \"131dba81-ae70-4090-a6eb-8ebf5f86d388\") " pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:39.917800 kubelet[2892]: E1212 17:26:39.917779 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.917800 kubelet[2892]: W1212 17:26:39.917790 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.917800 kubelet[2892]: E1212 17:26:39.917798 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.917996 kubelet[2892]: I1212 17:26:39.917818 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/131dba81-ae70-4090-a6eb-8ebf5f86d388-registration-dir\") pod \"csi-node-driver-5cw4v\" (UID: \"131dba81-ae70-4090-a6eb-8ebf5f86d388\") " pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:39.918095 kubelet[2892]: E1212 17:26:39.918080 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.918175 kubelet[2892]: W1212 17:26:39.918162 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.918241 kubelet[2892]: E1212 17:26:39.918230 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.918516 kubelet[2892]: E1212 17:26:39.918436 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.918516 kubelet[2892]: W1212 17:26:39.918448 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.918516 kubelet[2892]: E1212 17:26:39.918457 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.918896 kubelet[2892]: E1212 17:26:39.918799 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.918896 kubelet[2892]: W1212 17:26:39.918813 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.918896 kubelet[2892]: E1212 17:26:39.918823 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.919040 kubelet[2892]: E1212 17:26:39.919028 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.919093 kubelet[2892]: W1212 17:26:39.919083 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.919186 kubelet[2892]: E1212 17:26:39.919174 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.919559 kubelet[2892]: E1212 17:26:39.919473 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.919559 kubelet[2892]: W1212 17:26:39.919487 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.919559 kubelet[2892]: E1212 17:26:39.919498 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.919856 kubelet[2892]: E1212 17:26:39.919844 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.919965 kubelet[2892]: W1212 17:26:39.919903 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.919965 kubelet[2892]: E1212 17:26:39.919918 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.920234 kubelet[2892]: E1212 17:26:39.920222 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.920370 kubelet[2892]: W1212 17:26:39.920305 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.920370 kubelet[2892]: E1212 17:26:39.920321 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.920425 kubelet[2892]: I1212 17:26:39.920363 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/131dba81-ae70-4090-a6eb-8ebf5f86d388-varrun\") pod \"csi-node-driver-5cw4v\" (UID: \"131dba81-ae70-4090-a6eb-8ebf5f86d388\") " pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:39.920607 kubelet[2892]: E1212 17:26:39.920595 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.920737 kubelet[2892]: W1212 17:26:39.920654 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.920737 kubelet[2892]: E1212 17:26:39.920669 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:39.921087 kubelet[2892]: E1212 17:26:39.921020 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.921087 kubelet[2892]: W1212 17:26:39.921033 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.921087 kubelet[2892]: E1212 17:26:39.921044 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.921357 kubelet[2892]: E1212 17:26:39.921258 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.921357 kubelet[2892]: W1212 17:26:39.921268 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.921531 kubelet[2892]: E1212 17:26:39.921419 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.921668 kubelet[2892]: E1212 17:26:39.921631 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:39.921668 kubelet[2892]: W1212 17:26:39.921641 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:39.921668 kubelet[2892]: E1212 17:26:39.921651 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:39.931894 containerd[1692]: time="2025-12-12T17:26:39.931848297Z" level=info msg="connecting to shim d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5" address="unix:///run/containerd/s/aea482691cd61399749bc796adc9d26f2b5cca251935d357440a99f12441c137" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:39.952376 systemd[1]: Started cri-containerd-d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5.scope - libcontainer container d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5. 
Dec 12 17:26:39.960000 audit: BPF prog-id=161 op=LOAD Dec 12 17:26:39.961000 audit: BPF prog-id=162 op=LOAD Dec 12 17:26:39.961000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.961000 audit: BPF prog-id=162 op=UNLOAD Dec 12 17:26:39.961000 audit[3486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.961000 audit: BPF prog-id=163 op=LOAD Dec 12 17:26:39.961000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.961000 audit: BPF prog-id=164 op=LOAD Dec 12 17:26:39.961000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.962000 audit: BPF prog-id=164 op=UNLOAD Dec 12 17:26:39.962000 audit[3486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.962000 audit: BPF prog-id=163 op=UNLOAD Dec 12 17:26:39.962000 audit[3486]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.962000 audit: BPF prog-id=165 op=LOAD Dec 12 17:26:39.962000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:39.962000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438313738343531343364626534323662666564326465376463636364 Dec 12 17:26:39.976542 containerd[1692]: time="2025-12-12T17:26:39.976353403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-frftk,Uid:a254442a-7427-49a8-b367-a270b7a232c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\"" Dec 12 17:26:40.021410 kubelet[2892]: E1212 17:26:40.021364 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.021410 kubelet[2892]: W1212 17:26:40.021389 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.021410 kubelet[2892]: E1212 17:26:40.021407 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.021683 kubelet[2892]: E1212 17:26:40.021670 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.021683 kubelet[2892]: W1212 17:26:40.021680 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.021755 kubelet[2892]: E1212 17:26:40.021688 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:40.021911 kubelet[2892]: E1212 17:26:40.021883 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.021911 kubelet[2892]: W1212 17:26:40.021896 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.021911 kubelet[2892]: E1212 17:26:40.021904 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.022096 kubelet[2892]: E1212 17:26:40.022083 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.022096 kubelet[2892]: W1212 17:26:40.022094 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.022173 kubelet[2892]: E1212 17:26:40.022101 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.022307 kubelet[2892]: E1212 17:26:40.022295 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.022307 kubelet[2892]: W1212 17:26:40.022306 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.022370 kubelet[2892]: E1212 17:26:40.022314 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.022520 kubelet[2892]: E1212 17:26:40.022508 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.022520 kubelet[2892]: W1212 17:26:40.022518 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.022582 kubelet[2892]: E1212 17:26:40.022527 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.022679 kubelet[2892]: E1212 17:26:40.022668 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.022679 kubelet[2892]: W1212 17:26:40.022677 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.022758 kubelet[2892]: E1212 17:26:40.022685 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:40.022847 kubelet[2892]: E1212 17:26:40.022835 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.022847 kubelet[2892]: W1212 17:26:40.022845 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.022893 kubelet[2892]: E1212 17:26:40.022853 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.023013 kubelet[2892]: E1212 17:26:40.023002 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.023013 kubelet[2892]: W1212 17:26:40.023011 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.023066 kubelet[2892]: E1212 17:26:40.023019 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.023195 kubelet[2892]: E1212 17:26:40.023184 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.023195 kubelet[2892]: W1212 17:26:40.023193 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.023305 kubelet[2892]: E1212 17:26:40.023201 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.023379 kubelet[2892]: E1212 17:26:40.023366 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.023379 kubelet[2892]: W1212 17:26:40.023375 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.023458 kubelet[2892]: E1212 17:26:40.023384 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.023532 kubelet[2892]: E1212 17:26:40.023520 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.023532 kubelet[2892]: W1212 17:26:40.023529 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.023873 kubelet[2892]: E1212 17:26:40.023537 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:40.023873 kubelet[2892]: E1212 17:26:40.023734 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.023873 kubelet[2892]: W1212 17:26:40.023748 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.023873 kubelet[2892]: E1212 17:26:40.023761 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.024013 kubelet[2892]: E1212 17:26:40.024002 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.024219 kubelet[2892]: W1212 17:26:40.024051 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.024219 kubelet[2892]: E1212 17:26:40.024064 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.024430 kubelet[2892]: E1212 17:26:40.024399 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.024507 kubelet[2892]: W1212 17:26:40.024481 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.024558 kubelet[2892]: E1212 17:26:40.024548 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.024827 kubelet[2892]: E1212 17:26:40.024774 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.024827 kubelet[2892]: W1212 17:26:40.024785 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.024827 kubelet[2892]: E1212 17:26:40.024798 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.025143 kubelet[2892]: E1212 17:26:40.025103 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.025198 kubelet[2892]: W1212 17:26:40.025187 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.025274 kubelet[2892]: E1212 17:26:40.025263 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:40.025509 kubelet[2892]: E1212 17:26:40.025496 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.025688 kubelet[2892]: W1212 17:26:40.025558 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.025688 kubelet[2892]: E1212 17:26:40.025573 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.025819 kubelet[2892]: E1212 17:26:40.025807 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.025869 kubelet[2892]: W1212 17:26:40.025858 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.026009 kubelet[2892]: E1212 17:26:40.025912 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.026104 kubelet[2892]: E1212 17:26:40.026091 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.026195 kubelet[2892]: W1212 17:26:40.026183 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.026341 kubelet[2892]: E1212 17:26:40.026244 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.026444 kubelet[2892]: E1212 17:26:40.026433 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.026494 kubelet[2892]: W1212 17:26:40.026483 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.026551 kubelet[2892]: E1212 17:26:40.026539 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.026887 kubelet[2892]: E1212 17:26:40.026772 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.026887 kubelet[2892]: W1212 17:26:40.026784 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.026887 kubelet[2892]: E1212 17:26:40.026794 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:40.027039 kubelet[2892]: E1212 17:26:40.027026 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.027182 kubelet[2892]: W1212 17:26:40.027100 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.028267 kubelet[2892]: E1212 17:26:40.028236 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.028620 kubelet[2892]: E1212 17:26:40.028604 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.028683 kubelet[2892]: W1212 17:26:40.028621 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.028683 kubelet[2892]: E1212 17:26:40.028636 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.028836 kubelet[2892]: E1212 17:26:40.028822 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.028836 kubelet[2892]: W1212 17:26:40.028833 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.028879 kubelet[2892]: E1212 17:26:40.028842 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:40.037627 kubelet[2892]: E1212 17:26:40.037593 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:40.037627 kubelet[2892]: W1212 17:26:40.037614 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:40.037627 kubelet[2892]: E1212 17:26:40.037630 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:40.398000 audit[3540]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:40.398000 audit[3540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffccb27fb0 a2=0 a3=1 items=0 ppid=3021 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:40.398000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:40.406000 audit[3540]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:40.406000 audit[3540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffccb27fb0 a2=0 a3=1 items=0 ppid=3021 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:40.406000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:41.370465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147080478.mount: Deactivated successfully. Dec 12 17:26:41.783062 containerd[1692]: time="2025-12-12T17:26:41.783013144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:41.784517 containerd[1692]: time="2025-12-12T17:26:41.784468711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Dec 12 17:26:41.785683 containerd[1692]: time="2025-12-12T17:26:41.785612797Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:41.788013 containerd[1692]: time="2025-12-12T17:26:41.787764288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:41.788407 containerd[1692]: time="2025-12-12T17:26:41.788384051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.949812948s" Dec 12 17:26:41.788526 containerd[1692]: time="2025-12-12T17:26:41.788498531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Dec 12 17:26:41.790252 containerd[1692]: time="2025-12-12T17:26:41.790215260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Dec 12 17:26:41.804207 containerd[1692]: time="2025-12-12T17:26:41.803971490Z" level=info msg="CreateContainer within sandbox \"da17f9ee0e678c94297a645bb0b78c5df6355efc08472293cf81c70028821cc3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 12 
17:26:41.816940 containerd[1692]: time="2025-12-12T17:26:41.816886956Z" level=info msg="Container 852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:41.826811 containerd[1692]: time="2025-12-12T17:26:41.826751086Z" level=info msg="CreateContainer within sandbox \"da17f9ee0e678c94297a645bb0b78c5df6355efc08472293cf81c70028821cc3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950\"" Dec 12 17:26:41.827338 containerd[1692]: time="2025-12-12T17:26:41.827250808Z" level=info msg="StartContainer for \"852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950\"" Dec 12 17:26:41.828589 containerd[1692]: time="2025-12-12T17:26:41.828551375Z" level=info msg="connecting to shim 852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950" address="unix:///run/containerd/s/e30e0e7f86274466f39485ae77338d6955c1b388a91d17786f9af38a2e524cdc" protocol=ttrpc version=3 Dec 12 17:26:41.847334 systemd[1]: Started cri-containerd-852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950.scope - libcontainer container 852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950. Dec 12 17:26:41.859000 audit: BPF prog-id=166 op=LOAD Dec 12 17:26:41.860000 audit: BPF prog-id=167 op=LOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3388 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.860000 audit: BPF prog-id=167 op=UNLOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.860000 audit: BPF prog-id=168 op=LOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3388 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.860000 audit: BPF prog-id=169 op=LOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3388 pid=3551 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.860000 audit: BPF prog-id=169 op=UNLOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.860000 audit: BPF prog-id=168 op=UNLOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3388 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.860000 audit: BPF prog-id=170 op=LOAD Dec 12 17:26:41.860000 audit[3551]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3388 pid=3551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:41.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3835326538613337623036623630633161643634646163303165623566 Dec 12 17:26:41.888584 containerd[1692]: time="2025-12-12T17:26:41.888546120Z" level=info msg="StartContainer for \"852e8a37b06b60c1ad64dac01eb5fa4984e796e793f943cc0eab0c0805ed0950\" returns successfully" Dec 12 17:26:41.968219 kubelet[2892]: E1212 17:26:41.967855 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:26:42.066961 kubelet[2892]: I1212 17:26:42.066697 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c6c7cd5f7-x8vgd" podStartSLOduration=1.11538127 podStartE2EDuration="3.066676865s" podCreationTimestamp="2025-12-12 17:26:39 +0000 UTC" firstStartedPulling="2025-12-12 17:26:39.838264382 +0000 UTC m=+24.962317660" lastFinishedPulling="2025-12-12 17:26:41.789559977 +0000 UTC 
m=+26.913613255" observedRunningTime="2025-12-12 17:26:42.066474544 +0000 UTC m=+27.190527942" watchObservedRunningTime="2025-12-12 17:26:42.066676865 +0000 UTC m=+27.190730143" Dec 12 17:26:42.110660 kubelet[2892]: E1212 17:26:42.110622 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.110660 kubelet[2892]: W1212 17:26:42.110646 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.110660 kubelet[2892]: E1212 17:26:42.110667 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.110898 kubelet[2892]: E1212 17:26:42.110827 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.110898 kubelet[2892]: W1212 17:26:42.110849 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.110898 kubelet[2892]: E1212 17:26:42.110865 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.111359 kubelet[2892]: E1212 17:26:42.110985 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.111359 kubelet[2892]: W1212 17:26:42.110997 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.111359 kubelet[2892]: E1212 17:26:42.111017 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.111359 kubelet[2892]: E1212 17:26:42.111204 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.111359 kubelet[2892]: W1212 17:26:42.111213 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.111359 kubelet[2892]: E1212 17:26:42.111222 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.112376 kubelet[2892]: E1212 17:26:42.112351 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.112376 kubelet[2892]: W1212 17:26:42.112367 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.112376 kubelet[2892]: E1212 17:26:42.112378 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:42.112555 kubelet[2892]: E1212 17:26:42.112531 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.112555 kubelet[2892]: W1212 17:26:42.112544 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.112555 kubelet[2892]: E1212 17:26:42.112553 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.113037 kubelet[2892]: E1212 17:26:42.112680 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.113037 kubelet[2892]: W1212 17:26:42.112688 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.113037 kubelet[2892]: E1212 17:26:42.112696 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.113255 kubelet[2892]: E1212 17:26:42.113210 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.113255 kubelet[2892]: W1212 17:26:42.113224 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.113255 kubelet[2892]: E1212 17:26:42.113234 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.113626 kubelet[2892]: E1212 17:26:42.113397 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.113626 kubelet[2892]: W1212 17:26:42.113406 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.113626 kubelet[2892]: E1212 17:26:42.113515 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.114096 kubelet[2892]: E1212 17:26:42.114079 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.114189 kubelet[2892]: W1212 17:26:42.114094 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.114189 kubelet[2892]: E1212 17:26:42.114129 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:42.114417 kubelet[2892]: E1212 17:26:42.114392 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.114417 kubelet[2892]: W1212 17:26:42.114402 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.114417 kubelet[2892]: E1212 17:26:42.114411 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.115839 kubelet[2892]: E1212 17:26:42.115812 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.116302 kubelet[2892]: W1212 17:26:42.115829 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.116302 kubelet[2892]: E1212 17:26:42.115870 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.116302 kubelet[2892]: E1212 17:26:42.116285 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.116302 kubelet[2892]: W1212 17:26:42.116296 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.116991 kubelet[2892]: E1212 17:26:42.116310 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.117388 kubelet[2892]: E1212 17:26:42.117351 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.117606 kubelet[2892]: W1212 17:26:42.117585 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.117651 kubelet[2892]: E1212 17:26:42.117609 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.120311 kubelet[2892]: E1212 17:26:42.120290 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.120311 kubelet[2892]: W1212 17:26:42.120305 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.120444 kubelet[2892]: E1212 17:26:42.120317 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:42.139393 kubelet[2892]: E1212 17:26:42.137846 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.139393 kubelet[2892]: W1212 17:26:42.137869 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.139393 kubelet[2892]: E1212 17:26:42.137887 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.140508 kubelet[2892]: E1212 17:26:42.140399 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.140508 kubelet[2892]: W1212 17:26:42.140419 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.140508 kubelet[2892]: E1212 17:26:42.140435 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.142846 kubelet[2892]: E1212 17:26:42.142809 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.142846 kubelet[2892]: W1212 17:26:42.142843 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.142939 kubelet[2892]: E1212 17:26:42.142858 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.143129 kubelet[2892]: E1212 17:26:42.143103 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.143129 kubelet[2892]: W1212 17:26:42.143126 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.143186 kubelet[2892]: E1212 17:26:42.143136 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.143700 kubelet[2892]: E1212 17:26:42.143679 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.143700 kubelet[2892]: W1212 17:26:42.143695 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.143777 kubelet[2892]: E1212 17:26:42.143706 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:42.145206 kubelet[2892]: E1212 17:26:42.145182 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.145206 kubelet[2892]: W1212 17:26:42.145206 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.145286 kubelet[2892]: E1212 17:26:42.145218 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.145453 kubelet[2892]: E1212 17:26:42.145438 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.145453 kubelet[2892]: W1212 17:26:42.145450 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.145497 kubelet[2892]: E1212 17:26:42.145458 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.145648 kubelet[2892]: E1212 17:26:42.145635 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.145648 kubelet[2892]: W1212 17:26:42.145646 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.145700 kubelet[2892]: E1212 17:26:42.145654 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.145805 kubelet[2892]: E1212 17:26:42.145793 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.145805 kubelet[2892]: W1212 17:26:42.145803 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.145851 kubelet[2892]: E1212 17:26:42.145810 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.145971 kubelet[2892]: E1212 17:26:42.145959 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.145993 kubelet[2892]: W1212 17:26:42.145971 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.145993 kubelet[2892]: E1212 17:26:42.145979 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:42.146169 kubelet[2892]: E1212 17:26:42.146154 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.146169 kubelet[2892]: W1212 17:26:42.146166 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.146225 kubelet[2892]: E1212 17:26:42.146175 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.146426 kubelet[2892]: E1212 17:26:42.146411 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.146426 kubelet[2892]: W1212 17:26:42.146423 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.146476 kubelet[2892]: E1212 17:26:42.146431 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.146732 kubelet[2892]: E1212 17:26:42.146718 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.146732 kubelet[2892]: W1212 17:26:42.146732 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.146775 kubelet[2892]: E1212 17:26:42.146768 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.146971 kubelet[2892]: E1212 17:26:42.146946 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.146971 kubelet[2892]: W1212 17:26:42.146958 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.146971 kubelet[2892]: E1212 17:26:42.146966 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.147154 kubelet[2892]: E1212 17:26:42.147141 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.147154 kubelet[2892]: W1212 17:26:42.147153 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.147199 kubelet[2892]: E1212 17:26:42.147161 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:42.147362 kubelet[2892]: E1212 17:26:42.147314 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.147362 kubelet[2892]: W1212 17:26:42.147327 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.147362 kubelet[2892]: E1212 17:26:42.147337 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.147534 kubelet[2892]: E1212 17:26:42.147495 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.147534 kubelet[2892]: W1212 17:26:42.147506 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.147534 kubelet[2892]: E1212 17:26:42.147513 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:42.147918 kubelet[2892]: E1212 17:26:42.147887 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:42.147918 kubelet[2892]: W1212 17:26:42.147903 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:42.147918 kubelet[2892]: E1212 17:26:42.147919 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:43.013542 containerd[1692]: time="2025-12-12T17:26:43.013497196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:43.015605 containerd[1692]: time="2025-12-12T17:26:43.015557326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:43.018299 containerd[1692]: time="2025-12-12T17:26:43.018265620Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:43.024200 containerd[1692]: time="2025-12-12T17:26:43.024160770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:43.024595 containerd[1692]: time="2025-12-12T17:26:43.024556772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.234178031s" Dec 12 17:26:43.024595 containerd[1692]: time="2025-12-12T17:26:43.024590612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Dec 12 17:26:43.033598 containerd[1692]: time="2025-12-12T17:26:43.033558178Z" level=info msg="CreateContainer within sandbox \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 12 17:26:43.044291 containerd[1692]: time="2025-12-12T17:26:43.044246672Z" level=info msg="Container 950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:43.053002 containerd[1692]: time="2025-12-12T17:26:43.052960797Z" level=info msg="CreateContainer within sandbox \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0\"" Dec 12 17:26:43.053653 containerd[1692]: time="2025-12-12T17:26:43.053622040Z" level=info msg="StartContainer for \"950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0\"" Dec 12 17:26:43.055311 containerd[1692]: time="2025-12-12T17:26:43.055273728Z" level=info msg="connecting to shim 950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0" address="unix:///run/containerd/s/aea482691cd61399749bc796adc9d26f2b5cca251935d357440a99f12441c137" protocol=ttrpc version=3 Dec 12 17:26:43.062342 kubelet[2892]: I1212 17:26:43.061983 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:26:43.079534 systemd[1]: Started cri-containerd-950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0.scope - libcontainer container 950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0. 
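
[Editor's note] The kubelet errors repeated throughout the entries above come from its FlexVolume dynamic-probe path: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" and expects a small JSON status object on stdout. Because the binary is not installed yet (the flexvol-driver init container created just above is what provides it), the call produces empty output, and unmarshalling "" is what yields the "unexpected end of JSON input" error. A minimal stdlib-only Go sketch of that call-and-parse pattern; the callDriverInit helper and driverStatus struct are ours for illustration, not kubelet code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the minimal shape a FlexVolume driver is expected to
// print for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriverInit execs the driver and parses its stdout; with a missing
// binary the exec error corresponds to the W1212 "driver call failed" lines,
// and unmarshalling the empty output reproduces the E1212 error exactly.
func callDriverInit(driverPath string) (*driverStatus, error) {
	out, err := exec.Command(driverPath, "init").CombinedOutput()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// encoding/json reports "unexpected end of JSON input" for empty input.
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, uerr)
	}
	return &st, nil
}

func main() {
	// Path taken from the journal; on a node without the driver installed
	// this prints both the warning and the unmarshal error seen above.
	_, err := callDriverInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
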
Dec 12 17:26:43.119000 audit: BPF prog-id=171 op=LOAD Dec 12 17:26:43.119000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3474 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303633306266343031376261393663326332373636376464333936 Dec 12 17:26:43.119000 audit: BPF prog-id=172 op=LOAD Dec 12 17:26:43.119000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3474 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303633306266343031376261393663326332373636376464333936 Dec 12 17:26:43.119000 audit: BPF prog-id=172 op=UNLOAD Dec 12 17:26:43.119000 audit[3632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303633306266343031376261393663326332373636376464333936 Dec 12 17:26:43.119000 audit: BPF prog-id=171 op=UNLOAD Dec 12 17:26:43.119000 audit[3632]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303633306266343031376261393663326332373636376464333936 Dec 12 17:26:43.119000 audit: BPF prog-id=173 op=LOAD Dec 12 17:26:43.119000 audit[3632]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3474 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:43.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935303633306266343031376261393663326332373636376464333936 Dec 12 17:26:43.127183 kubelet[2892]: E1212 17:26:43.127155 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Dec 12 17:26:43.127413 kubelet[2892]: W1212 17:26:43.127295 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.127413 kubelet[2892]: E1212 17:26:43.127322 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.127705 kubelet[2892]: E1212 17:26:43.127537 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.127705 kubelet[2892]: W1212 17:26:43.127547 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.127705 kubelet[2892]: E1212 17:26:43.127591 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.127835 kubelet[2892]: E1212 17:26:43.127823 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.127889 kubelet[2892]: W1212 17:26:43.127879 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.127943 kubelet[2892]: E1212 17:26:43.127934 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.128193 kubelet[2892]: E1212 17:26:43.128179 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.128258 kubelet[2892]: W1212 17:26:43.128247 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.128318 kubelet[2892]: E1212 17:26:43.128308 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.128559 kubelet[2892]: E1212 17:26:43.128544 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.128677 kubelet[2892]: W1212 17:26:43.128663 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.128734 kubelet[2892]: E1212 17:26:43.128724 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:43.129075 kubelet[2892]: E1212 17:26:43.128987 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.129075 kubelet[2892]: W1212 17:26:43.128999 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.129075 kubelet[2892]: E1212 17:26:43.129009 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.129258 kubelet[2892]: E1212 17:26:43.129246 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.129311 kubelet[2892]: W1212 17:26:43.129301 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.129365 kubelet[2892]: E1212 17:26:43.129355 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.129583 kubelet[2892]: E1212 17:26:43.129570 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.129743 kubelet[2892]: W1212 17:26:43.129636 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.129743 kubelet[2892]: E1212 17:26:43.129649 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.129876 kubelet[2892]: E1212 17:26:43.129865 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.130001 kubelet[2892]: W1212 17:26:43.129915 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.130001 kubelet[2892]: E1212 17:26:43.129928 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.130151 kubelet[2892]: E1212 17:26:43.130137 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.130206 kubelet[2892]: W1212 17:26:43.130196 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.130330 kubelet[2892]: E1212 17:26:43.130245 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:43.130430 kubelet[2892]: E1212 17:26:43.130419 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.130482 kubelet[2892]: W1212 17:26:43.130472 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.130542 kubelet[2892]: E1212 17:26:43.130532 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.130836 kubelet[2892]: E1212 17:26:43.130742 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.130836 kubelet[2892]: W1212 17:26:43.130754 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.130836 kubelet[2892]: E1212 17:26:43.130763 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.131005 kubelet[2892]: E1212 17:26:43.130993 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.131061 kubelet[2892]: W1212 17:26:43.131051 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.131144 kubelet[2892]: E1212 17:26:43.131100 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.131464 kubelet[2892]: E1212 17:26:43.131369 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.131464 kubelet[2892]: W1212 17:26:43.131383 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.131464 kubelet[2892]: E1212 17:26:43.131393 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.131632 kubelet[2892]: E1212 17:26:43.131621 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.131694 kubelet[2892]: W1212 17:26:43.131682 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.131756 kubelet[2892]: E1212 17:26:43.131745 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:43.138052 containerd[1692]: time="2025-12-12T17:26:43.138018669Z" level=info msg="StartContainer for \"950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0\" returns successfully" Dec 12 17:26:43.149740 kubelet[2892]: E1212 17:26:43.149673 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.149740 kubelet[2892]: W1212 17:26:43.149691 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.149740 kubelet[2892]: E1212 17:26:43.149707 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.150007 kubelet[2892]: E1212 17:26:43.149915 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.150007 kubelet[2892]: W1212 17:26:43.149923 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.150007 kubelet[2892]: E1212 17:26:43.149932 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.150099 kubelet[2892]: E1212 17:26:43.150082 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.150099 kubelet[2892]: W1212 17:26:43.150094 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.150099 kubelet[2892]: E1212 17:26:43.150102 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.150354 kubelet[2892]: E1212 17:26:43.150332 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.150354 kubelet[2892]: W1212 17:26:43.150345 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.150354 kubelet[2892]: E1212 17:26:43.150352 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 12 17:26:43.150498 kubelet[2892]: E1212 17:26:43.150484 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.150498 kubelet[2892]: W1212 17:26:43.150494 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.150552 kubelet[2892]: E1212 17:26:43.150502 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.150641 kubelet[2892]: E1212 17:26:43.150624 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.150641 kubelet[2892]: W1212 17:26:43.150635 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.150641 kubelet[2892]: E1212 17:26:43.150642 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.150795 kubelet[2892]: E1212 17:26:43.150773 2892 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 12 17:26:43.150795 kubelet[2892]: W1212 17:26:43.150795 2892 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 12 17:26:43.150853 kubelet[2892]: E1212 17:26:43.150803 2892 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 12 17:26:43.152048 systemd[1]: cri-containerd-950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0.scope: Deactivated successfully. Dec 12 17:26:43.154424 containerd[1692]: time="2025-12-12T17:26:43.154382272Z" level=info msg="received container exit event container_id:\"950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0\" id:\"950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0\" pid:3645 exited_at:{seconds:1765560403 nanos:153531868}" Dec 12 17:26:43.156000 audit: BPF prog-id=173 op=UNLOAD Dec 12 17:26:43.176150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-950630bf4017ba96c2c27667dd396b55e2ebc3be5f2284689b85f54b1255ddc0-rootfs.mount: Deactivated successfully. 
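
[Editor's note] The exit event above records the flexvol-driver container's end time as a raw seconds/nanos pair (exited_at:{seconds:1765560403 nanos:153531868}) rather than a formatted timestamp. This small Go sketch, added here purely as a cross-check, converts the pair back to wall-clock time with the standard library; the result lines up with the surrounding journal timestamps:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the containerd exit event for container 950630bf...
	const seconds, nanos = 1765560403, 153531868

	exitedAt := time.Unix(seconds, nanos).UTC()
	fmt.Println(exitedAt.Format(time.RFC3339Nano))
	// Prints 2025-12-12T17:26:43.153531868Z, which matches the journal line
	// stamped Dec 12 17:26:43.154382 just after the exit was received.
}
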
Dec 12 17:26:43.968578 kubelet[2892]: E1212 17:26:43.968210 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:26:44.066496 containerd[1692]: time="2025-12-12T17:26:44.066443826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Dec 12 17:26:45.968575 kubelet[2892]: E1212 17:26:45.968526 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:26:46.210131 containerd[1692]: time="2025-12-12T17:26:46.209759637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:46.211197 containerd[1692]: time="2025-12-12T17:26:46.211145324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Dec 12 17:26:46.212386 containerd[1692]: time="2025-12-12T17:26:46.212348170Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:46.215024 containerd[1692]: time="2025-12-12T17:26:46.214574702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:46.215218 containerd[1692]: time="2025-12-12T17:26:46.215168265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.148685638s" Dec 12 17:26:46.215218 containerd[1692]: time="2025-12-12T17:26:46.215195345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Dec 12 17:26:46.220017 containerd[1692]: time="2025-12-12T17:26:46.219934729Z" level=info msg="CreateContainer within sandbox \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 12 17:26:46.229440 containerd[1692]: time="2025-12-12T17:26:46.229394377Z" level=info msg="Container 5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:46.233018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount84498142.mount: Deactivated successfully. 
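
[Editor's note] The "Pulled image ... in 2.148685638s" figure above can be cross-checked against the journal itself, assuming the PullImage line at 17:26:44.066443826Z marks when containerd began the pull and the completion line at 17:26:46.215168265Z marks its end. A short Go sketch (ours, added for verification only) does the subtraction; the delta agrees with the reported duration to within the few tens of microseconds containerd spends between measuring and emitting the log line:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two containerd log lines for the
	// ghcr.io/flatcar/calico/cni:v3.30.4 pull.
	start, _ := time.Parse(time.RFC3339Nano, "2025-12-12T17:26:44.066443826Z")
	end, _ := time.Parse(time.RFC3339Nano, "2025-12-12T17:26:46.215168265Z")

	fmt.Println(end.Sub(start)) // ~2.148724439s, vs. the logged 2.148685638s
}
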
Dec 12 17:26:46.239444 containerd[1692]: time="2025-12-12T17:26:46.239402708Z" level=info msg="CreateContainer within sandbox \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858\"" Dec 12 17:26:46.240304 containerd[1692]: time="2025-12-12T17:26:46.240151872Z" level=info msg="StartContainer for \"5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858\"" Dec 12 17:26:46.244135 containerd[1692]: time="2025-12-12T17:26:46.244007491Z" level=info msg="connecting to shim 5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858" address="unix:///run/containerd/s/aea482691cd61399749bc796adc9d26f2b5cca251935d357440a99f12441c137" protocol=ttrpc version=3 Dec 12 17:26:46.264546 systemd[1]: Started cri-containerd-5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858.scope - libcontainer container 5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858. Dec 12 17:26:46.325312 kubelet[2892]: I1212 17:26:46.324774 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:26:46.334068 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 12 17:26:46.334210 kernel: audit: type=1334 audit(1765560406.331:575): prog-id=174 op=LOAD Dec 12 17:26:46.331000 audit: BPF prog-id=174 op=LOAD Dec 12 17:26:46.331000 audit[3715]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.337890 kernel: audit: type=1300 audit(1765560406.331:575): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.343192 kernel: audit: type=1327 audit(1765560406.331:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.331000 audit: BPF prog-id=175 op=LOAD Dec 12 17:26:46.350355 kernel: audit: type=1334 audit(1765560406.331:576): prog-id=175 op=LOAD Dec 12 17:26:46.350392 kernel: audit: type=1300 audit(1765560406.331:576): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.350409 kernel: audit: type=1327 audit(1765560406.331:576): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.331000 audit[3715]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.331000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.332000 audit: BPF prog-id=175 op=UNLOAD Dec 12 17:26:46.354514 kernel: audit: type=1334 audit(1765560406.332:577): prog-id=175 op=UNLOAD Dec 12 17:26:46.354744 kernel: audit: type=1300 audit(1765560406.332:577): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.332000 audit[3715]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.361319 kernel: audit: type=1327 audit(1765560406.332:577): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.361631 kernel: audit: type=1334 audit(1765560406.332:578): prog-id=174 op=UNLOAD Dec 12 17:26:46.332000 audit: BPF prog-id=174 op=UNLOAD Dec 12 17:26:46.332000 audit[3715]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.332000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.332000 audit: BPF prog-id=176 op=LOAD Dec 12 17:26:46.332000 audit[3715]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3474 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.332000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538323164386237393837633033663834346135663462383362633466 Dec 12 17:26:46.364000 audit[3735]: NETFILTER_CFG table=filter:117 family=2 entries=21 op=nft_register_rule pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:46.364000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffef373110 a2=0 a3=1 items=0 ppid=3021 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.364000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:46.372000 audit[3735]: NETFILTER_CFG table=nat:118 family=2 entries=19 op=nft_register_chain pid=3735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:46.372000 audit[3735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffef373110 a2=0 a3=1 items=0 ppid=3021 pid=3735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:46.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:46.381404 containerd[1692]: time="2025-12-12T17:26:46.381360189Z" level=info msg="StartContainer for \"5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858\" returns successfully" Dec 12 17:26:46.772137 containerd[1692]: time="2025-12-12T17:26:46.771311931Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 12 17:26:46.804944 systemd[1]: cri-containerd-5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858.scope: Deactivated successfully. Dec 12 17:26:46.805140 containerd[1692]: time="2025-12-12T17:26:46.804898261Z" level=info msg="received container exit event container_id:\"5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858\" id:\"5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858\" pid:3728 exited_at:{seconds:1765560406 nanos:804679740}" Dec 12 17:26:46.805277 systemd[1]: cri-containerd-5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858.scope: Consumed 452ms CPU time, 192M memory peak, 165.9M written to disk. Dec 12 17:26:46.807000 audit: BPF prog-id=176 op=UNLOAD Dec 12 17:26:46.824863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5821d8b7987c03f844a5f4b83bc4fb2b3fa435805ade8af26a4183b17940d858-rootfs.mount: Deactivated successfully. Dec 12 17:26:46.860838 kubelet[2892]: I1212 17:26:46.860615 2892 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Dec 12 17:26:46.914973 systemd[1]: Created slice kubepods-besteffort-pod9edb9d6f_c2b4_4544_aec2_09be399b44d1.slice - libcontainer container kubepods-besteffort-pod9edb9d6f_c2b4_4544_aec2_09be399b44d1.slice. 
Dec 12 17:26:46.921393 systemd[1]: Created slice kubepods-besteffort-pode2b17dac_df63_4a54_9c33_9908026d55bd.slice - libcontainer container kubepods-besteffort-pode2b17dac_df63_4a54_9c33_9908026d55bd.slice. Dec 12 17:26:46.924788 systemd[1]: Created slice kubepods-burstable-pod85ab89f0_e1e8_4d9a_9a38_ab9ec6dd1fa5.slice - libcontainer container kubepods-burstable-pod85ab89f0_e1e8_4d9a_9a38_ab9ec6dd1fa5.slice. Dec 12 17:26:46.932664 systemd[1]: Created slice kubepods-burstable-podab177ad4_2ea1_44c1_83d1_270008bbbaa1.slice - libcontainer container kubepods-burstable-podab177ad4_2ea1_44c1_83d1_270008bbbaa1.slice. Dec 12 17:26:46.938295 systemd[1]: Created slice kubepods-besteffort-podc5623500_c182_4c24_ab8c_0d0ff956b10c.slice - libcontainer container kubepods-besteffort-podc5623500_c182_4c24_ab8c_0d0ff956b10c.slice. Dec 12 17:26:46.945200 systemd[1]: Created slice kubepods-besteffort-podf42fc071_54b4_491f_b752_90f8070727e3.slice - libcontainer container kubepods-besteffort-podf42fc071_54b4_491f_b752_90f8070727e3.slice. Dec 12 17:26:46.955486 systemd[1]: Created slice kubepods-besteffort-pod3dccdef9_807b_4475_ade1_4a0bc2c4fe76.slice - libcontainer container kubepods-besteffort-pod3dccdef9_807b_4475_ade1_4a0bc2c4fe76.slice. Dec 12 17:26:46.963020 systemd[1]: Created slice kubepods-besteffort-pod4c63d250_1806_4ef2_8959_7aad6322f80f.slice - libcontainer container kubepods-besteffort-pod4c63d250_1806_4ef2_8959_7aad6322f80f.slice. Dec 12 17:26:46.977501 kubelet[2892]: I1212 17:26:46.977465 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkw9\" (UniqueName: \"kubernetes.io/projected/9edb9d6f-c2b4-4544-aec2-09be399b44d1-kube-api-access-rrkw9\") pod \"calico-kube-controllers-7db4fd4bfb-cz8pj\" (UID: \"9edb9d6f-c2b4-4544-aec2-09be399b44d1\") " pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" Dec 12 17:26:46.977501 kubelet[2892]: I1212 17:26:46.977506 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmxp\" (UniqueName: \"kubernetes.io/projected/ab177ad4-2ea1-44c1-83d1-270008bbbaa1-kube-api-access-8bmxp\") pod \"coredns-66bc5c9577-jqzql\" (UID: \"ab177ad4-2ea1-44c1-83d1-270008bbbaa1\") " pod="kube-system/coredns-66bc5c9577-jqzql" Dec 12 17:26:46.977861 kubelet[2892]: I1212 17:26:46.977524 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6r8z\" (UniqueName: \"kubernetes.io/projected/85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5-kube-api-access-l6r8z\") pod \"coredns-66bc5c9577-wpvbh\" (UID: \"85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5\") " pod="kube-system/coredns-66bc5c9577-wpvbh" Dec 12 17:26:46.977861 kubelet[2892]: I1212 17:26:46.977540 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4c63d250-1806-4ef2-8959-7aad6322f80f-goldmane-key-pair\") pod \"goldmane-7c778bb748-c6kdq\" (UID: \"4c63d250-1806-4ef2-8959-7aad6322f80f\") " pod="calico-system/goldmane-7c778bb748-c6kdq" Dec 12 17:26:46.977861 kubelet[2892]: I1212 17:26:46.977567 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l84lj\" (UniqueName: \"kubernetes.io/projected/c5623500-c182-4c24-ab8c-0d0ff956b10c-kube-api-access-l84lj\") pod \"whisker-84d8f6ddd4-dxsbq\" (UID: \"c5623500-c182-4c24-ab8c-0d0ff956b10c\") " pod="calico-system/whisker-84d8f6ddd4-dxsbq" Dec 12 
17:26:46.977861 kubelet[2892]: I1212 17:26:46.977583 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3dccdef9-807b-4475-ade1-4a0bc2c4fe76-calico-apiserver-certs\") pod \"calico-apiserver-67f4c54f9f-ttqlr\" (UID: \"3dccdef9-807b-4475-ade1-4a0bc2c4fe76\") " pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" Dec 12 17:26:46.977861 kubelet[2892]: I1212 17:26:46.977598 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ghgc\" (UniqueName: \"kubernetes.io/projected/4c63d250-1806-4ef2-8959-7aad6322f80f-kube-api-access-8ghgc\") pod \"goldmane-7c778bb748-c6kdq\" (UID: \"4c63d250-1806-4ef2-8959-7aad6322f80f\") " pod="calico-system/goldmane-7c778bb748-c6kdq" Dec 12 17:26:46.977980 kubelet[2892]: I1212 17:26:46.977617 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-ca-bundle\") pod \"whisker-84d8f6ddd4-dxsbq\" (UID: \"c5623500-c182-4c24-ab8c-0d0ff956b10c\") " pod="calico-system/whisker-84d8f6ddd4-dxsbq" Dec 12 17:26:46.977980 kubelet[2892]: I1212 17:26:46.977632 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4d6g\" (UniqueName: \"kubernetes.io/projected/3dccdef9-807b-4475-ade1-4a0bc2c4fe76-kube-api-access-n4d6g\") pod \"calico-apiserver-67f4c54f9f-ttqlr\" (UID: \"3dccdef9-807b-4475-ade1-4a0bc2c4fe76\") " pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" Dec 12 17:26:46.977980 kubelet[2892]: I1212 17:26:46.977646 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f42fc071-54b4-491f-b752-90f8070727e3-calico-apiserver-certs\") pod \"calico-apiserver-849959c995-dlz6z\" (UID: \"f42fc071-54b4-491f-b752-90f8070727e3\") " pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" Dec 12 17:26:46.977980 kubelet[2892]: I1212 17:26:46.977662 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2kvt\" (UniqueName: \"kubernetes.io/projected/f42fc071-54b4-491f-b752-90f8070727e3-kube-api-access-h2kvt\") pod \"calico-apiserver-849959c995-dlz6z\" (UID: \"f42fc071-54b4-491f-b752-90f8070727e3\") " pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" Dec 12 17:26:46.977980 kubelet[2892]: I1212 17:26:46.977678 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2b17dac-df63-4a54-9c33-9908026d55bd-calico-apiserver-certs\") pod \"calico-apiserver-67f4c54f9f-5ds7p\" (UID: \"e2b17dac-df63-4a54-9c33-9908026d55bd\") " pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" Dec 12 17:26:46.978082 kubelet[2892]: I1212 17:26:46.977697 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab177ad4-2ea1-44c1-83d1-270008bbbaa1-config-volume\") pod \"coredns-66bc5c9577-jqzql\" (UID: \"ab177ad4-2ea1-44c1-83d1-270008bbbaa1\") " pod="kube-system/coredns-66bc5c9577-jqzql" Dec 12 17:26:46.978082 kubelet[2892]: I1212 17:26:46.977713 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9edb9d6f-c2b4-4544-aec2-09be399b44d1-tigera-ca-bundle\") pod \"calico-kube-controllers-7db4fd4bfb-cz8pj\" (UID: \"9edb9d6f-c2b4-4544-aec2-09be399b44d1\") " pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" Dec 12 17:26:46.978082 kubelet[2892]: I1212 17:26:46.977730 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-backend-key-pair\") pod \"whisker-84d8f6ddd4-dxsbq\" (UID: \"c5623500-c182-4c24-ab8c-0d0ff956b10c\") " pod="calico-system/whisker-84d8f6ddd4-dxsbq" Dec 12 17:26:46.978082 kubelet[2892]: I1212 17:26:46.977745 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4msf\" (UniqueName: \"kubernetes.io/projected/e2b17dac-df63-4a54-9c33-9908026d55bd-kube-api-access-b4msf\") pod \"calico-apiserver-67f4c54f9f-5ds7p\" (UID: \"e2b17dac-df63-4a54-9c33-9908026d55bd\") " pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" Dec 12 17:26:46.978082 kubelet[2892]: I1212 17:26:46.977760 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5-config-volume\") pod \"coredns-66bc5c9577-wpvbh\" (UID: \"85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5\") " pod="kube-system/coredns-66bc5c9577-wpvbh" Dec 12 17:26:46.978220 kubelet[2892]: I1212 17:26:46.977775 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c63d250-1806-4ef2-8959-7aad6322f80f-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-c6kdq\" (UID: \"4c63d250-1806-4ef2-8959-7aad6322f80f\") " pod="calico-system/goldmane-7c778bb748-c6kdq" Dec 12 17:26:46.978220 kubelet[2892]: I1212 17:26:46.977793 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c63d250-1806-4ef2-8959-7aad6322f80f-config\") pod \"goldmane-7c778bb748-c6kdq\" (UID: \"4c63d250-1806-4ef2-8959-7aad6322f80f\") " pod="calico-system/goldmane-7c778bb748-c6kdq" Dec 12 17:26:47.078846 containerd[1692]: time="2025-12-12T17:26:47.078324611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Dec 12 17:26:47.222083 containerd[1692]: time="2025-12-12T17:26:47.222041661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db4fd4bfb-cz8pj,Uid:9edb9d6f-c2b4-4544-aec2-09be399b44d1,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:47.226914 containerd[1692]: time="2025-12-12T17:26:47.226882245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-5ds7p,Uid:e2b17dac-df63-4a54-9c33-9908026d55bd,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:47.235603 containerd[1692]: time="2025-12-12T17:26:47.235569730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wpvbh,Uid:85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:47.244982 containerd[1692]: time="2025-12-12T17:26:47.243320089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d8f6ddd4-dxsbq,Uid:c5623500-c182-4c24-ab8c-0d0ff956b10c,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:47.244982 containerd[1692]: time="2025-12-12T17:26:47.243698331Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jqzql,Uid:ab177ad4-2ea1-44c1-83d1-270008bbbaa1,Namespace:kube-system,Attempt:0,}" Dec 12 17:26:47.252613 containerd[1692]: time="2025-12-12T17:26:47.252569896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849959c995-dlz6z,Uid:f42fc071-54b4-491f-b752-90f8070727e3,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:47.264545 containerd[1692]: time="2025-12-12T17:26:47.264505397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-ttqlr,Uid:3dccdef9-807b-4475-ade1-4a0bc2c4fe76,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:47.272307 containerd[1692]: time="2025-12-12T17:26:47.272259796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c6kdq,Uid:4c63d250-1806-4ef2-8959-7aad6322f80f,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:47.359246 containerd[1692]: time="2025-12-12T17:26:47.359131517Z" level=error msg="Failed to destroy network for sandbox \"0811ccaf7b087f9510278ba497fb781756469de4ccaf680bda276d1efd60ad58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.362181 containerd[1692]: time="2025-12-12T17:26:47.362061532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jqzql,Uid:ab177ad4-2ea1-44c1-83d1-270008bbbaa1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0811ccaf7b087f9510278ba497fb781756469de4ccaf680bda276d1efd60ad58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.362596 kubelet[2892]: E1212 17:26:47.362549 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0811ccaf7b087f9510278ba497fb781756469de4ccaf680bda276d1efd60ad58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.362701 kubelet[2892]: E1212 17:26:47.362622 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0811ccaf7b087f9510278ba497fb781756469de4ccaf680bda276d1efd60ad58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jqzql" Dec 12 17:26:47.362701 kubelet[2892]: E1212 17:26:47.362643 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0811ccaf7b087f9510278ba497fb781756469de4ccaf680bda276d1efd60ad58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jqzql" Dec 12 17:26:47.362950 kubelet[2892]: E1212 17:26:47.362702 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jqzql_kube-system(ab177ad4-2ea1-44c1-83d1-270008bbbaa1)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jqzql_kube-system(ab177ad4-2ea1-44c1-83d1-270008bbbaa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0811ccaf7b087f9510278ba497fb781756469de4ccaf680bda276d1efd60ad58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jqzql" podUID="ab177ad4-2ea1-44c1-83d1-270008bbbaa1" Dec 12 17:26:47.370254 containerd[1692]: time="2025-12-12T17:26:47.370196694Z" level=error msg="Failed to destroy network for sandbox \"55db19bdeb0e329748e62d8edcfbacb4b0f6db189d21f0d9c9ac2713c5692683\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.373707 containerd[1692]: time="2025-12-12T17:26:47.373664951Z" level=error msg="Failed to destroy network for sandbox \"baffb9f78a10ab7fd632f4bcd13f3824cbf562d9fb0296ef8824cb55deeda44e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.375238 containerd[1692]: time="2025-12-12T17:26:47.375189199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-5ds7p,Uid:e2b17dac-df63-4a54-9c33-9908026d55bd,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55db19bdeb0e329748e62d8edcfbacb4b0f6db189d21f0d9c9ac2713c5692683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.375486 kubelet[2892]: E1212 17:26:47.375450 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55db19bdeb0e329748e62d8edcfbacb4b0f6db189d21f0d9c9ac2713c5692683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.375560 kubelet[2892]: E1212 17:26:47.375503 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55db19bdeb0e329748e62d8edcfbacb4b0f6db189d21f0d9c9ac2713c5692683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" Dec 12 17:26:47.375560 kubelet[2892]: E1212 17:26:47.375522 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55db19bdeb0e329748e62d8edcfbacb4b0f6db189d21f0d9c9ac2713c5692683\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" Dec 12 17:26:47.375685 kubelet[2892]: E1212 17:26:47.375566 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55db19bdeb0e329748e62d8edcfbacb4b0f6db189d21f0d9c9ac2713c5692683\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:26:47.375877 containerd[1692]: time="2025-12-12T17:26:47.375764882Z" level=error msg="Failed to destroy network for sandbox \"a69fc6c2f544a2d54ebc7bbcda333bb829c5e38288683e0fc5c5ffb74f4b1c8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.380865 containerd[1692]: time="2025-12-12T17:26:47.380786668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-ttqlr,Uid:3dccdef9-807b-4475-ade1-4a0bc2c4fe76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"baffb9f78a10ab7fd632f4bcd13f3824cbf562d9fb0296ef8824cb55deeda44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.381182 containerd[1692]: time="2025-12-12T17:26:47.381093509Z" level=error msg="Failed to destroy network for sandbox \"031ad28e273ea50757b14c782b3984b5319a0cc5b941ae35cf6b343214b7db59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.381235 kubelet[2892]: E1212 17:26:47.381110 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baffb9f78a10ab7fd632f4bcd13f3824cbf562d9fb0296ef8824cb55deeda44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.381292 kubelet[2892]: E1212 17:26:47.381234 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baffb9f78a10ab7fd632f4bcd13f3824cbf562d9fb0296ef8824cb55deeda44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" Dec 12 17:26:47.381292 kubelet[2892]: E1212 17:26:47.381255 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baffb9f78a10ab7fd632f4bcd13f3824cbf562d9fb0296ef8824cb55deeda44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" Dec 12 17:26:47.381497 kubelet[2892]: E1212 
17:26:47.381403 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"baffb9f78a10ab7fd632f4bcd13f3824cbf562d9fb0296ef8824cb55deeda44e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:26:47.384244 containerd[1692]: time="2025-12-12T17:26:47.384189765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wpvbh,Uid:85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69fc6c2f544a2d54ebc7bbcda333bb829c5e38288683e0fc5c5ffb74f4b1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.384889 kubelet[2892]: E1212 17:26:47.384453 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69fc6c2f544a2d54ebc7bbcda333bb829c5e38288683e0fc5c5ffb74f4b1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.384889 kubelet[2892]: E1212 17:26:47.384520 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69fc6c2f544a2d54ebc7bbcda333bb829c5e38288683e0fc5c5ffb74f4b1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wpvbh" Dec 12 17:26:47.384889 kubelet[2892]: E1212 17:26:47.384540 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69fc6c2f544a2d54ebc7bbcda333bb829c5e38288683e0fc5c5ffb74f4b1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-wpvbh" Dec 12 17:26:47.385178 kubelet[2892]: E1212 17:26:47.384603 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-wpvbh_kube-system(85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-wpvbh_kube-system(85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a69fc6c2f544a2d54ebc7bbcda333bb829c5e38288683e0fc5c5ffb74f4b1c8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-wpvbh" 
podUID="85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5" Dec 12 17:26:47.386481 containerd[1692]: time="2025-12-12T17:26:47.386435816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c6kdq,Uid:4c63d250-1806-4ef2-8959-7aad6322f80f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"031ad28e273ea50757b14c782b3984b5319a0cc5b941ae35cf6b343214b7db59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.386649 kubelet[2892]: E1212 17:26:47.386616 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031ad28e273ea50757b14c782b3984b5319a0cc5b941ae35cf6b343214b7db59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.386700 kubelet[2892]: E1212 17:26:47.386687 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031ad28e273ea50757b14c782b3984b5319a0cc5b941ae35cf6b343214b7db59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c6kdq" Dec 12 17:26:47.386743 kubelet[2892]: E1212 17:26:47.386707 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031ad28e273ea50757b14c782b3984b5319a0cc5b941ae35cf6b343214b7db59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-c6kdq" Dec 12 17:26:47.386788 kubelet[2892]: E1212 17:26:47.386757 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"031ad28e273ea50757b14c782b3984b5319a0cc5b941ae35cf6b343214b7db59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:26:47.397588 containerd[1692]: time="2025-12-12T17:26:47.397541433Z" level=error msg="Failed to destroy network for sandbox \"cf51aac0b21cefca2420fdf2926cbb1479ff4eefd2678f160fa02a7330c1d083\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.398045 containerd[1692]: time="2025-12-12T17:26:47.398017115Z" level=error msg="Failed to destroy network for sandbox \"cf070d48448ade6a0d05e02075d27d3e8a95f0d5d639f1936d108f07529f83db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.402664 containerd[1692]: time="2025-12-12T17:26:47.402618738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db4fd4bfb-cz8pj,Uid:9edb9d6f-c2b4-4544-aec2-09be399b44d1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf070d48448ade6a0d05e02075d27d3e8a95f0d5d639f1936d108f07529f83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.403063 kubelet[2892]: E1212 17:26:47.403015 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf070d48448ade6a0d05e02075d27d3e8a95f0d5d639f1936d108f07529f83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.403148 kubelet[2892]: E1212 17:26:47.403080 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf070d48448ade6a0d05e02075d27d3e8a95f0d5d639f1936d108f07529f83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" Dec 12 17:26:47.403148 kubelet[2892]: E1212 17:26:47.403100 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf070d48448ade6a0d05e02075d27d3e8a95f0d5d639f1936d108f07529f83db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" Dec 12 17:26:47.403208 kubelet[2892]: E1212 17:26:47.403163 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf070d48448ade6a0d05e02075d27d3e8a95f0d5d639f1936d108f07529f83db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:26:47.403537 containerd[1692]: time="2025-12-12T17:26:47.403452343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84d8f6ddd4-dxsbq,Uid:c5623500-c182-4c24-ab8c-0d0ff956b10c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf51aac0b21cefca2420fdf2926cbb1479ff4eefd2678f160fa02a7330c1d083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Dec 12 17:26:47.403776 kubelet[2892]: E1212 17:26:47.403736 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf51aac0b21cefca2420fdf2926cbb1479ff4eefd2678f160fa02a7330c1d083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.403776 kubelet[2892]: E1212 17:26:47.403771 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf51aac0b21cefca2420fdf2926cbb1479ff4eefd2678f160fa02a7330c1d083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84d8f6ddd4-dxsbq" Dec 12 17:26:47.403889 kubelet[2892]: E1212 17:26:47.403789 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf51aac0b21cefca2420fdf2926cbb1479ff4eefd2678f160fa02a7330c1d083\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84d8f6ddd4-dxsbq" Dec 12 17:26:47.403889 kubelet[2892]: E1212 17:26:47.403825 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84d8f6ddd4-dxsbq_calico-system(c5623500-c182-4c24-ab8c-0d0ff956b10c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84d8f6ddd4-dxsbq_calico-system(c5623500-c182-4c24-ab8c-0d0ff956b10c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf51aac0b21cefca2420fdf2926cbb1479ff4eefd2678f160fa02a7330c1d083\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84d8f6ddd4-dxsbq" podUID="c5623500-c182-4c24-ab8c-0d0ff956b10c" Dec 12 17:26:47.406959 containerd[1692]: time="2025-12-12T17:26:47.406923560Z" level=error msg="Failed to destroy network for sandbox \"719f05e7ed9746470c63c47cd1350f811143823e6dc3afa33eedbffa4499eb6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.409806 containerd[1692]: time="2025-12-12T17:26:47.409762935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849959c995-dlz6z,Uid:f42fc071-54b4-491f-b752-90f8070727e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"719f05e7ed9746470c63c47cd1350f811143823e6dc3afa33eedbffa4499eb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.410108 kubelet[2892]: E1212 17:26:47.410080 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719f05e7ed9746470c63c47cd1350f811143823e6dc3afa33eedbffa4499eb6f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:47.410197 kubelet[2892]: E1212 17:26:47.410156 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719f05e7ed9746470c63c47cd1350f811143823e6dc3afa33eedbffa4499eb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" Dec 12 17:26:47.410197 kubelet[2892]: E1212 17:26:47.410180 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"719f05e7ed9746470c63c47cd1350f811143823e6dc3afa33eedbffa4499eb6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" Dec 12 17:26:47.410248 kubelet[2892]: E1212 17:26:47.410225 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"719f05e7ed9746470c63c47cd1350f811143823e6dc3afa33eedbffa4499eb6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:26:47.972934 systemd[1]: Created slice kubepods-besteffort-pod131dba81_ae70_4090_a6eb_8ebf5f86d388.slice - libcontainer container kubepods-besteffort-pod131dba81_ae70_4090_a6eb_8ebf5f86d388.slice. 
Dec 12 17:26:47.979700 containerd[1692]: time="2025-12-12T17:26:47.979663591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cw4v,Uid:131dba81-ae70-4090-a6eb-8ebf5f86d388,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:48.023490 containerd[1692]: time="2025-12-12T17:26:48.023446893Z" level=error msg="Failed to destroy network for sandbox \"69eeae9d03e8fcbb986dce7bc75ed44d59dbc0b60fafa5ec05542059e90ec43f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:48.027077 containerd[1692]: time="2025-12-12T17:26:48.027041391Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cw4v,Uid:131dba81-ae70-4090-a6eb-8ebf5f86d388,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69eeae9d03e8fcbb986dce7bc75ed44d59dbc0b60fafa5ec05542059e90ec43f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:48.027312 kubelet[2892]: E1212 17:26:48.027268 2892 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69eeae9d03e8fcbb986dce7bc75ed44d59dbc0b60fafa5ec05542059e90ec43f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 12 17:26:48.028394 kubelet[2892]: E1212 17:26:48.027326 2892 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69eeae9d03e8fcbb986dce7bc75ed44d59dbc0b60fafa5ec05542059e90ec43f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:48.028394 kubelet[2892]: E1212 17:26:48.027345 2892 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69eeae9d03e8fcbb986dce7bc75ed44d59dbc0b60fafa5ec05542059e90ec43f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5cw4v" Dec 12 17:26:48.028394 kubelet[2892]: E1212 17:26:48.027394 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69eeae9d03e8fcbb986dce7bc75ed44d59dbc0b60fafa5ec05542059e90ec43f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:26:48.230387 systemd[1]: run-netns-cni\x2da10d8c74\x2d520f\x2d2f40\x2d227c\x2de1fe41576644.mount: Deactivated successfully. 
Dec 12 17:26:48.230477 systemd[1]: run-netns-cni\x2dd5bfc521\x2db739\x2d8f58\x2d6331\x2d925c0956b7ff.mount: Deactivated successfully. Dec 12 17:26:48.230529 systemd[1]: run-netns-cni\x2d276538db\x2d962e\x2d0eca\x2d6905\x2d97a752ad4fb2.mount: Deactivated successfully. Dec 12 17:26:48.230574 systemd[1]: run-netns-cni\x2d0c6a0f6f\x2d4664\x2d547a\x2d4423\x2d38eda55105f3.mount: Deactivated successfully. Dec 12 17:26:48.230612 systemd[1]: run-netns-cni\x2d05943c31\x2d2218\x2d5ed9\x2da960\x2d36c398e2ee18.mount: Deactivated successfully. Dec 12 17:26:50.601292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922276986.mount: Deactivated successfully. Dec 12 17:26:50.621057 containerd[1692]: time="2025-12-12T17:26:50.620981812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.622379 containerd[1692]: time="2025-12-12T17:26:50.622323379Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Dec 12 17:26:50.623339 containerd[1692]: time="2025-12-12T17:26:50.623310184Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.625329 containerd[1692]: time="2025-12-12T17:26:50.625289354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 12 17:26:50.625789 containerd[1692]: time="2025-12-12T17:26:50.625750716Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.547360745s" Dec 12 17:26:50.625789 containerd[1692]: time="2025-12-12T17:26:50.625782276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Dec 12 17:26:50.643737 containerd[1692]: time="2025-12-12T17:26:50.643693487Z" level=info msg="CreateContainer within sandbox \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 12 17:26:50.654950 containerd[1692]: time="2025-12-12T17:26:50.654904504Z" level=info msg="Container 68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:50.664629 containerd[1692]: time="2025-12-12T17:26:50.664506073Z" level=info msg="CreateContainer within sandbox \"d817845143dbe426bfed2de7dcccdddd69e7c5fbfa24ab89106818c54004d0e5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef\"" Dec 12 17:26:50.665024 containerd[1692]: time="2025-12-12T17:26:50.664997036Z" level=info msg="StartContainer for \"68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef\"" Dec 12 17:26:50.666836 containerd[1692]: time="2025-12-12T17:26:50.666801405Z" level=info msg="connecting to shim 68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef" 
address="unix:///run/containerd/s/aea482691cd61399749bc796adc9d26f2b5cca251935d357440a99f12441c137" protocol=ttrpc version=3 Dec 12 17:26:50.690348 systemd[1]: Started cri-containerd-68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef.scope - libcontainer container 68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef. Dec 12 17:26:50.757000 audit: BPF prog-id=177 op=LOAD Dec 12 17:26:50.757000 audit[4074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3474 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323531643264373337313532653561303066336166366664643435 Dec 12 17:26:50.758000 audit: BPF prog-id=178 op=LOAD Dec 12 17:26:50.758000 audit[4074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3474 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323531643264373337313532653561303066336166366664643435 Dec 12 17:26:50.758000 audit: BPF prog-id=178 op=UNLOAD Dec 12 17:26:50.758000 audit[4074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323531643264373337313532653561303066336166366664643435 Dec 12 17:26:50.758000 audit: BPF prog-id=177 op=UNLOAD Dec 12 17:26:50.758000 audit[4074]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.758000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323531643264373337313532653561303066336166366664643435 Dec 12 17:26:50.758000 audit: BPF prog-id=179 op=LOAD Dec 12 17:26:50.758000 audit[4074]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3474 pid=4074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:50.758000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3638323531643264373337313532653561303066336166366664643435 Dec 12 17:26:50.780353 containerd[1692]: time="2025-12-12T17:26:50.780239141Z" level=info msg="StartContainer for \"68251d2d737152e5a00f3af6fdd458f95479f4f3565ae5f5f2f2218b47ff6eef\" returns successfully" Dec 12 17:26:50.921173 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 12 17:26:50.921288 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 12 17:26:51.105152 kubelet[2892]: I1212 17:26:51.104548 2892 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-backend-key-pair\") pod \"c5623500-c182-4c24-ab8c-0d0ff956b10c\" (UID: \"c5623500-c182-4c24-ab8c-0d0ff956b10c\") " Dec 12 17:26:51.105152 kubelet[2892]: I1212 17:26:51.104601 2892 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l84lj\" (UniqueName: \"kubernetes.io/projected/c5623500-c182-4c24-ab8c-0d0ff956b10c-kube-api-access-l84lj\") pod \"c5623500-c182-4c24-ab8c-0d0ff956b10c\" (UID: \"c5623500-c182-4c24-ab8c-0d0ff956b10c\") " Dec 12 17:26:51.105152 kubelet[2892]: I1212 17:26:51.104634 2892 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-ca-bundle\") pod \"c5623500-c182-4c24-ab8c-0d0ff956b10c\" (UID: \"c5623500-c182-4c24-ab8c-0d0ff956b10c\") " Dec 12 17:26:51.105152 kubelet[2892]: I1212 17:26:51.105001 2892 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c5623500-c182-4c24-ab8c-0d0ff956b10c" (UID: "c5623500-c182-4c24-ab8c-0d0ff956b10c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 12 17:26:51.109640 kubelet[2892]: I1212 17:26:51.109606 2892 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c5623500-c182-4c24-ab8c-0d0ff956b10c" (UID: "c5623500-c182-4c24-ab8c-0d0ff956b10c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 12 17:26:51.109819 kubelet[2892]: I1212 17:26:51.109739 2892 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5623500-c182-4c24-ab8c-0d0ff956b10c-kube-api-access-l84lj" (OuterVolumeSpecName: "kube-api-access-l84lj") pod "c5623500-c182-4c24-ab8c-0d0ff956b10c" (UID: "c5623500-c182-4c24-ab8c-0d0ff956b10c"). InnerVolumeSpecName "kube-api-access-l84lj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 12 17:26:51.205390 kubelet[2892]: I1212 17:26:51.205252 2892 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-backend-key-pair\") on node \"ci-4515-1-0-e-d121438740\" DevicePath \"\"" Dec 12 17:26:51.205390 kubelet[2892]: I1212 17:26:51.205321 2892 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l84lj\" (UniqueName: \"kubernetes.io/projected/c5623500-c182-4c24-ab8c-0d0ff956b10c-kube-api-access-l84lj\") on node \"ci-4515-1-0-e-d121438740\" DevicePath \"\"" Dec 12 17:26:51.205390 kubelet[2892]: I1212 17:26:51.205352 2892 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5623500-c182-4c24-ab8c-0d0ff956b10c-whisker-ca-bundle\") on node \"ci-4515-1-0-e-d121438740\" DevicePath \"\"" Dec 12 17:26:51.393749 systemd[1]: Removed slice kubepods-besteffort-podc5623500_c182_4c24_ab8c_0d0ff956b10c.slice - libcontainer container kubepods-besteffort-podc5623500_c182_4c24_ab8c_0d0ff956b10c.slice. Dec 12 17:26:51.407972 kubelet[2892]: I1212 17:26:51.407705 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-frftk" podStartSLOduration=1.75900294 podStartE2EDuration="12.407689129s" podCreationTimestamp="2025-12-12 17:26:39 +0000 UTC" firstStartedPulling="2025-12-12 17:26:39.97770185 +0000 UTC m=+25.101755128" lastFinishedPulling="2025-12-12 17:26:50.626388039 +0000 UTC m=+35.750441317" observedRunningTime="2025-12-12 17:26:51.10677416 +0000 UTC m=+36.230827438" watchObservedRunningTime="2025-12-12 17:26:51.407689129 +0000 UTC m=+36.531742407" Dec 12 17:26:51.457920 systemd[1]: Created slice kubepods-besteffort-pod6cf9d953_a32f_4658_a2fe_c69d46e96850.slice - libcontainer container kubepods-besteffort-pod6cf9d953_a32f_4658_a2fe_c69d46e96850.slice. Dec 12 17:26:51.507698 kubelet[2892]: I1212 17:26:51.507653 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6cf9d953-a32f-4658-a2fe-c69d46e96850-whisker-backend-key-pair\") pod \"whisker-67768bd4f8-7kbcr\" (UID: \"6cf9d953-a32f-4658-a2fe-c69d46e96850\") " pod="calico-system/whisker-67768bd4f8-7kbcr" Dec 12 17:26:51.507698 kubelet[2892]: I1212 17:26:51.507702 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cf9d953-a32f-4658-a2fe-c69d46e96850-whisker-ca-bundle\") pod \"whisker-67768bd4f8-7kbcr\" (UID: \"6cf9d953-a32f-4658-a2fe-c69d46e96850\") " pod="calico-system/whisker-67768bd4f8-7kbcr" Dec 12 17:26:51.507853 kubelet[2892]: I1212 17:26:51.507719 2892 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvq4k\" (UniqueName: \"kubernetes.io/projected/6cf9d953-a32f-4658-a2fe-c69d46e96850-kube-api-access-cvq4k\") pod \"whisker-67768bd4f8-7kbcr\" (UID: \"6cf9d953-a32f-4658-a2fe-c69d46e96850\") " pod="calico-system/whisker-67768bd4f8-7kbcr" Dec 12 17:26:51.601408 systemd[1]: var-lib-kubelet-pods-c5623500\x2dc182\x2d4c24\x2dab8c\x2d0d0ff956b10c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl84lj.mount: Deactivated successfully. 
Dec 12 17:26:51.601511 systemd[1]: var-lib-kubelet-pods-c5623500\x2dc182\x2d4c24\x2dab8c\x2d0d0ff956b10c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Dec 12 17:26:51.764651 containerd[1692]: time="2025-12-12T17:26:51.764616863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67768bd4f8-7kbcr,Uid:6cf9d953-a32f-4658-a2fe-c69d46e96850,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:51.895786 systemd-networkd[1604]: cali3ec35bda773: Link UP Dec 12 17:26:51.896690 systemd-networkd[1604]: cali3ec35bda773: Gained carrier Dec 12 17:26:51.911538 containerd[1692]: 2025-12-12 17:26:51.788 [INFO][4140] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 12 17:26:51.911538 containerd[1692]: 2025-12-12 17:26:51.807 [INFO][4140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0 whisker-67768bd4f8- calico-system 6cf9d953-a32f-4658-a2fe-c69d46e96850 900 0 2025-12-12 17:26:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67768bd4f8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 whisker-67768bd4f8-7kbcr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3ec35bda773 [] [] }} ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-" Dec 12 17:26:51.911538 containerd[1692]: 2025-12-12 17:26:51.807 [INFO][4140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.911538 containerd[1692]: 2025-12-12 17:26:51.850 [INFO][4154] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" HandleID="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Workload="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.850 [INFO][4154] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" HandleID="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Workload="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-d121438740", "pod":"whisker-67768bd4f8-7kbcr", "timestamp":"2025-12-12 17:26:51.850194858 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.850 [INFO][4154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
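
The two var-lib-kubelet-pods-…\x2d….mount units that systemd reports as deactivated above are the deleted whisker pod's volume mounts, with their paths encoded by systemd unit-name escaping: '/' becomes '-', and bytes that would be ambiguous are written as \xNN (here \x2d for '-' and \x7e for '~'). A small illustrative sketch of reversing that encoding (the inverse of what systemd-escape --path produces, not a drop-in replacement for systemd-escape --unescape):

    import re

    def unescape_unit_path(unit: str) -> str:
        """Recover the filesystem path from a systemd mount unit name."""
        name = unit.removesuffix(".mount")
        path = "/" + name.replace("-", "/")   # '-' separates path components
        # \xNN escapes encode literal bytes such as '-' (\x2d) and '~' (\x7e).
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

    unit = (r"var-lib-kubelet-pods-c5623500\x2dc182\x2d4c24\x2dab8c\x2d0d0ff956b10c"
            r"-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl84lj.mount")
    print(unescape_unit_path(unit))
    # /var/lib/kubelet/pods/c5623500-c182-4c24-ab8c-0d0ff956b10c/volumes/kubernetes.io~projected/kube-api-access-l84lj
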
Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.850 [INFO][4154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.850 [INFO][4154] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.861 [INFO][4154] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.865 [INFO][4154] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.869 [INFO][4154] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.871 [INFO][4154] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911750 containerd[1692]: 2025-12-12 17:26:51.873 [INFO][4154] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.873 [INFO][4154] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.875 [INFO][4154] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867 Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.879 [INFO][4154] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.886 [INFO][4154] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.193/26] block=192.168.99.192/26 handle="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.886 [INFO][4154] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.193/26] handle="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.886 [INFO][4154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
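
In the IPAM exchange above, the node holds an affinity for block 192.168.99.192/26 and claims 192.168.99.193/26 for the whisker pod (recorded on the endpoint below as 192.168.99.193/32). The quoted values are easy to sanity-check with the standard library; this is only a check of the block arithmetic, not a model of Calico's allocator:

    import ipaddress

    block  = ipaddress.ip_network("192.168.99.192/26")
    pod_ip = ipaddress.ip_address("192.168.99.193")

    print(block.num_addresses)    # 64 addresses in a /26 block
    print(pod_ip in block)        # True: the claimed IP lies inside the affine block
    print(block.network_address)  # 192.168.99.192, the block identifier in the log
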
Dec 12 17:26:51.911933 containerd[1692]: 2025-12-12 17:26:51.886 [INFO][4154] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.193/26] IPv6=[] ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" HandleID="k8s-pod-network.3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Workload="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.912059 containerd[1692]: 2025-12-12 17:26:51.889 [INFO][4140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0", GenerateName:"whisker-67768bd4f8-", Namespace:"calico-system", SelfLink:"", UID:"6cf9d953-a32f-4658-a2fe-c69d46e96850", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67768bd4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"whisker-67768bd4f8-7kbcr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3ec35bda773", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:51.912059 containerd[1692]: 2025-12-12 17:26:51.889 [INFO][4140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.193/32] ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.912140 containerd[1692]: 2025-12-12 17:26:51.889 [INFO][4140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ec35bda773 ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.912140 containerd[1692]: 2025-12-12 17:26:51.898 [INFO][4140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.912196 containerd[1692]: 2025-12-12 17:26:51.898 [INFO][4140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" 
Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0", GenerateName:"whisker-67768bd4f8-", Namespace:"calico-system", SelfLink:"", UID:"6cf9d953-a32f-4658-a2fe-c69d46e96850", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67768bd4f8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867", Pod:"whisker-67768bd4f8-7kbcr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.99.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3ec35bda773", MAC:"6e:01:da:ef:07:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:51.912244 containerd[1692]: 2025-12-12 17:26:51.908 [INFO][4140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" Namespace="calico-system" Pod="whisker-67768bd4f8-7kbcr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-whisker--67768bd4f8--7kbcr-eth0" Dec 12 17:26:51.941125 containerd[1692]: time="2025-12-12T17:26:51.940601077Z" level=info msg="connecting to shim 3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867" address="unix:///run/containerd/s/2584d5d69e0209324d6dc1c0cb812f1b6041722950c931aaca2d65604f6dbcd0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:51.970438 systemd[1]: Started cri-containerd-3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867.scope - libcontainer container 3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867. 
Dec 12 17:26:51.979000 audit: BPF prog-id=180 op=LOAD Dec 12 17:26:51.981272 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 12 17:26:51.981390 kernel: audit: type=1334 audit(1765560411.979:588): prog-id=180 op=LOAD Dec 12 17:26:51.979000 audit: BPF prog-id=181 op=LOAD Dec 12 17:26:51.982689 kernel: audit: type=1334 audit(1765560411.979:589): prog-id=181 op=LOAD Dec 12 17:26:51.982718 kernel: audit: type=1300 audit(1765560411.979:589): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.979000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.989986 kernel: audit: type=1327 audit(1765560411.979:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.990477 kernel: audit: type=1334 audit(1765560411.980:590): prog-id=181 op=UNLOAD Dec 12 17:26:51.980000 audit: BPF prog-id=181 op=UNLOAD Dec 12 17:26:51.980000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.994437 kernel: audit: type=1300 audit(1765560411.980:590): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.994630 kernel: audit: type=1327 audit(1765560411.980:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.980000 audit: BPF prog-id=182 op=LOAD Dec 12 17:26:51.998907 kernel: audit: type=1334 audit(1765560411.980:591): prog-id=182 op=LOAD Dec 12 17:26:51.998999 kernel: audit: type=1300 audit(1765560411.980:591): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.980000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:52.005936 kernel: audit: type=1327 audit(1765560411.980:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.980000 audit: BPF prog-id=183 op=LOAD Dec 12 17:26:51.980000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.981000 audit: BPF prog-id=183 op=UNLOAD Dec 12 17:26:51.981000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.981000 audit: BPF prog-id=182 op=UNLOAD Dec 12 17:26:51.981000 audit[4188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:51.981000 audit: BPF prog-id=184 op=LOAD Dec 12 17:26:51.981000 audit[4188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4178 pid=4188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:51.981000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363373462636332666634326432323230343536346332663662316134 Dec 12 17:26:52.020755 containerd[1692]: time="2025-12-12T17:26:52.020627124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67768bd4f8-7kbcr,Uid:6cf9d953-a32f-4658-a2fe-c69d46e96850,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c74bcc2ff42d22204564c2f6b1a4d476ecb25e06474bac9fe693de2b243f867\"" Dec 12 17:26:52.023374 containerd[1692]: time="2025-12-12T17:26:52.023102376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:26:52.092238 kubelet[2892]: I1212 17:26:52.092205 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:26:52.371591 containerd[1692]: time="2025-12-12T17:26:52.371201425Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:52.372711 containerd[1692]: time="2025-12-12T17:26:52.372656512Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:26:52.372798 containerd[1692]: time="2025-12-12T17:26:52.372676553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:52.372982 kubelet[2892]: E1212 17:26:52.372943 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:52.373547 kubelet[2892]: E1212 17:26:52.372993 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:26:52.373547 kubelet[2892]: E1212 17:26:52.373074 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:52.373910 containerd[1692]: time="2025-12-12T17:26:52.373839798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:26:52.528000 audit: BPF prog-id=185 op=LOAD Dec 12 17:26:52.528000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffca341288 a2=98 a3=ffffca341278 items=0 ppid=4229 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.528000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:52.528000 audit: BPF prog-id=185 op=UNLOAD Dec 12 17:26:52.528000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffca341258 a3=0 items=0 ppid=4229 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:52.528000 audit: BPF prog-id=186 op=LOAD Dec 12 17:26:52.528000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffca341138 a2=74 a3=95 items=0 ppid=4229 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:52.528000 audit: BPF prog-id=186 op=UNLOAD Dec 12 17:26:52.528000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4229 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:52.528000 audit: BPF prog-id=187 op=LOAD Dec 12 17:26:52.528000 audit[4342]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffca341168 a2=40 a3=ffffca341198 items=0 ppid=4229 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.528000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:52.528000 audit: BPF prog-id=187 op=UNLOAD Dec 12 17:26:52.528000 audit[4342]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffca341198 items=0 ppid=4229 pid=4342 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.528000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Dec 12 17:26:52.529000 audit: BPF prog-id=188 op=LOAD Dec 12 17:26:52.529000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffda4c1eb8 a2=98 a3=ffffda4c1ea8 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.529000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.530000 audit: BPF prog-id=188 op=UNLOAD Dec 12 17:26:52.530000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffda4c1e88 a3=0 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.530000 audit: BPF prog-id=189 op=LOAD Dec 12 17:26:52.530000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda4c1b48 a2=74 a3=95 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.530000 audit: BPF prog-id=189 op=UNLOAD Dec 12 17:26:52.530000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.530000 audit: BPF prog-id=190 op=LOAD Dec 12 17:26:52.530000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda4c1ba8 a2=94 a3=2 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.530000 audit: BPF prog-id=190 op=UNLOAD Dec 12 17:26:52.530000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.530000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.631000 audit: BPF prog-id=191 op=LOAD Dec 12 17:26:52.631000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffda4c1b68 a2=40 a3=ffffda4c1b98 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 12 17:26:52.631000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.631000 audit: BPF prog-id=191 op=UNLOAD Dec 12 17:26:52.631000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffda4c1b98 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.631000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.640000 audit: BPF prog-id=192 op=LOAD Dec 12 17:26:52.640000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda4c1b78 a2=94 a3=4 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.640000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.640000 audit: BPF prog-id=192 op=UNLOAD Dec 12 17:26:52.640000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.640000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.641000 audit: BPF prog-id=193 op=LOAD Dec 12 17:26:52.641000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffda4c19b8 a2=94 a3=5 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.641000 audit: BPF prog-id=193 op=UNLOAD Dec 12 17:26:52.641000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.641000 audit: BPF prog-id=194 op=LOAD Dec 12 17:26:52.641000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda4c1be8 a2=94 a3=6 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.641000 audit: BPF prog-id=194 op=UNLOAD Dec 12 17:26:52.641000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.641000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.642000 audit: BPF prog-id=195 op=LOAD Dec 12 17:26:52.642000 audit[4343]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffda4c13b8 a2=94 a3=83 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.642000 audit: BPF prog-id=196 op=LOAD Dec 12 17:26:52.642000 audit[4343]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffda4c1178 a2=94 a3=2 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.642000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.643000 audit: BPF prog-id=196 op=UNLOAD Dec 12 17:26:52.643000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.643000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.643000 audit: BPF prog-id=195 op=UNLOAD Dec 12 17:26:52.643000 audit[4343]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3514f620 a3=35142b00 items=0 ppid=4229 pid=4343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.643000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 12 17:26:52.653000 audit: BPF prog-id=197 op=LOAD Dec 12 17:26:52.653000 audit[4346]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc7bd198 a2=98 a3=ffffdc7bd188 items=0 ppid=4229 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.653000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:52.654000 audit: BPF prog-id=197 op=UNLOAD Dec 12 17:26:52.654000 audit[4346]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdc7bd168 a3=0 items=0 ppid=4229 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.654000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:52.654000 audit: BPF prog-id=198 op=LOAD Dec 12 17:26:52.654000 audit[4346]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc7bd048 a2=74 a3=95 items=0 ppid=4229 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.654000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:52.654000 audit: BPF prog-id=198 op=UNLOAD Dec 12 17:26:52.654000 audit[4346]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4229 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.654000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:52.654000 audit: BPF prog-id=199 op=LOAD Dec 12 17:26:52.654000 audit[4346]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc7bd078 a2=40 a3=ffffdc7bd0a8 items=0 ppid=4229 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.654000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:52.654000 audit: BPF prog-id=199 op=UNLOAD Dec 12 17:26:52.654000 audit[4346]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdc7bd0a8 items=0 ppid=4229 pid=4346 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.654000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 12 17:26:52.721515 systemd-networkd[1604]: vxlan.calico: Link UP Dec 12 17:26:52.721528 systemd-networkd[1604]: vxlan.calico: Gained carrier Dec 12 17:26:52.724000 audit: BPF prog-id=200 op=LOAD Dec 12 17:26:52.724000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd63b15a8 a2=98 a3=ffffd63b1598 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.724000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=200 op=UNLOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd63b1578 a3=0 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=201 op=LOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd63b1288 a2=74 a3=95 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=201 op=UNLOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=202 op=LOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd63b12e8 a2=94 a3=2 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=202 op=UNLOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=203 op=LOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd63b1168 a2=40 a3=ffffd63b1198 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=203 op=UNLOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffd63b1198 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=204 op=LOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd63b12b8 a2=94 a3=b7 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.725000 audit: BPF prog-id=204 op=UNLOAD Dec 12 17:26:52.725000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.725000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.727833 containerd[1692]: time="2025-12-12T17:26:52.727758357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:52.726000 audit: BPF prog-id=205 op=LOAD Dec 12 17:26:52.726000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd63b0968 a2=94 a3=2 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.726000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.727000 audit: BPF prog-id=205 op=UNLOAD Dec 12 17:26:52.727000 audit[4372]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.727000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.727000 audit: BPF prog-id=206 op=LOAD Dec 12 17:26:52.727000 audit[4372]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd63b0af8 a2=94 a3=30 items=0 ppid=4229 pid=4372 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.727000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 12 17:26:52.730507 containerd[1692]: time="2025-12-12T17:26:52.730453890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:26:52.731780 containerd[1692]: time="2025-12-12T17:26:52.730487211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:52.731961 kubelet[2892]: E1212 17:26:52.731924 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:52.732011 kubelet[2892]: E1212 17:26:52.731970 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:26:52.732074 kubelet[2892]: E1212 17:26:52.732056 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:52.732262 kubelet[2892]: E1212 17:26:52.732102 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:26:52.731000 audit: BPF prog-id=207 op=LOAD Dec 12 17:26:52.731000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=3 a0=5 a1=ffffd48e96f8 a2=98 a3=ffffd48e96e8 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.731000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.731000 audit: BPF prog-id=207 op=UNLOAD Dec 12 17:26:52.731000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd48e96c8 a3=0 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.731000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.732000 audit: BPF prog-id=208 op=LOAD Dec 12 17:26:52.732000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd48e9388 a2=74 a3=95 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.732000 audit: BPF prog-id=208 op=UNLOAD Dec 12 17:26:52.732000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.732000 audit: BPF prog-id=209 op=LOAD Dec 12 17:26:52.732000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd48e93e8 a2=94 a3=2 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.732000 audit: BPF prog-id=209 op=UNLOAD Dec 12 17:26:52.732000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.732000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 
12 17:26:52.834000 audit: BPF prog-id=210 op=LOAD Dec 12 17:26:52.834000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd48e93a8 a2=40 a3=ffffd48e93d8 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.834000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.834000 audit: BPF prog-id=210 op=UNLOAD Dec 12 17:26:52.834000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd48e93d8 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.834000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=211 op=LOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd48e93b8 a2=94 a3=4 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=211 op=UNLOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=212 op=LOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd48e91f8 a2=94 a3=5 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=212 op=UNLOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=213 op=LOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd48e9428 a2=94 a3=6 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=213 op=UNLOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.844000 audit: BPF prog-id=214 op=LOAD Dec 12 17:26:52.844000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd48e8bf8 a2=94 a3=83 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.844000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.845000 audit: BPF prog-id=215 op=LOAD Dec 12 17:26:52.845000 audit[4377]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd48e89b8 a2=94 a3=2 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.845000 audit: BPF prog-id=215 op=UNLOAD Dec 12 17:26:52.845000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.845000 audit: BPF prog-id=214 op=UNLOAD Dec 12 17:26:52.845000 audit[4377]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=e48c620 a3=e47fb00 items=0 ppid=4229 pid=4377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.845000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 12 17:26:52.860000 audit: BPF prog-id=206 op=UNLOAD Dec 12 17:26:52.860000 audit[4229]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000912200 a2=0 a3=0 items=0 ppid=4217 pid=4229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.860000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Dec 12 17:26:52.911000 audit[4403]: NETFILTER_CFG table=mangle:119 family=2 entries=16 op=nft_register_chain pid=4403 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:52.911000 audit[4403]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffece1b000 a2=0 a3=ffffb375bfa8 items=0 ppid=4229 pid=4403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.911000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:52.912000 audit[4404]: NETFILTER_CFG table=nat:120 family=2 entries=15 op=nft_register_chain pid=4404 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:52.912000 audit[4404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffff4bb1f20 a2=0 a3=ffffbc6b3fa8 items=0 ppid=4229 pid=4404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.912000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:52.918000 audit[4402]: NETFILTER_CFG table=raw:121 family=2 entries=21 op=nft_register_chain pid=4402 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:52.918000 audit[4402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd9070a90 a2=0 a3=ffff885b7fa8 items=0 ppid=4229 pid=4402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:52.918000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:52.923000 audit[4406]: NETFILTER_CFG table=filter:122 family=2 entries=94 op=nft_register_chain pid=4406 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:52.923000 audit[4406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffcbfeec50 a2=0 a3=ffff95ecdfa8 items=0 ppid=4229 pid=4406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 
17:26:52.923000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:52.970809 kubelet[2892]: I1212 17:26:52.970771 2892 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5623500-c182-4c24-ab8c-0d0ff956b10c" path="/var/lib/kubelet/pods/c5623500-c182-4c24-ab8c-0d0ff956b10c/volumes" Dec 12 17:26:53.095429 kubelet[2892]: E1212 17:26:53.095368 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:26:53.116000 audit[4415]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:53.116000 audit[4415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcc95d840 a2=0 a3=1 items=0 ppid=3021 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.116000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:53.121000 audit[4415]: NETFILTER_CFG table=nat:124 family=2 entries=14 op=nft_register_rule pid=4415 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:53.121000 audit[4415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcc95d840 a2=0 a3=1 items=0 ppid=3021 pid=4415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:53.121000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:53.171329 systemd-networkd[1604]: cali3ec35bda773: Gained IPv6LL Dec 12 17:26:54.619205 kubelet[2892]: I1212 17:26:54.619127 2892 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 12 17:26:54.644385 systemd-networkd[1604]: vxlan.calico: Gained IPv6LL Dec 12 17:26:57.970851 containerd[1692]: time="2025-12-12T17:26:57.970664117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-ttqlr,Uid:3dccdef9-807b-4475-ade1-4a0bc2c4fe76,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:58.088897 systemd-networkd[1604]: cali0f89dd070ac: Link UP Dec 12 17:26:58.089063 systemd-networkd[1604]: cali0f89dd070ac: Gained carrier Dec 12 17:26:58.101135 containerd[1692]: 2025-12-12 17:26:58.018 [INFO][4481] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0 calico-apiserver-67f4c54f9f- calico-apiserver 3dccdef9-807b-4475-ade1-4a0bc2c4fe76 841 0 2025-12-12 17:26:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f4c54f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 calico-apiserver-67f4c54f9f-ttqlr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f89dd070ac [] [] }} ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-" Dec 12 17:26:58.101135 containerd[1692]: 2025-12-12 17:26:58.018 [INFO][4481] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.101135 containerd[1692]: 2025-12-12 17:26:58.042 [INFO][4495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" HandleID="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.043 [INFO][4495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" HandleID="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd8e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-e-d121438740", "pod":"calico-apiserver-67f4c54f9f-ttqlr", "timestamp":"2025-12-12 17:26:58.042888004 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.043 [INFO][4495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.043 [INFO][4495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.043 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.053 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.060 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.065 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.067 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101323 containerd[1692]: 2025-12-12 17:26:58.069 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.069 [INFO][4495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.071 [INFO][4495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1 Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.075 [INFO][4495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.083 [INFO][4495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.194/26] block=192.168.99.192/26 handle="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.083 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.194/26] handle="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.084 [INFO][4495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:26:58.101500 containerd[1692]: 2025-12-12 17:26:58.084 [INFO][4495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.194/26] IPv6=[] ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" HandleID="k8s-pod-network.f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.101633 containerd[1692]: 2025-12-12 17:26:58.086 [INFO][4481] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0", GenerateName:"calico-apiserver-67f4c54f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dccdef9-807b-4475-ade1-4a0bc2c4fe76", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f4c54f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"calico-apiserver-67f4c54f9f-ttqlr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f89dd070ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:58.101684 containerd[1692]: 2025-12-12 17:26:58.086 [INFO][4481] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.194/32] ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.101684 containerd[1692]: 2025-12-12 17:26:58.086 [INFO][4481] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f89dd070ac ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.101684 containerd[1692]: 2025-12-12 17:26:58.088 [INFO][4481] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.101738 containerd[1692]: 2025-12-12 
17:26:58.089 [INFO][4481] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0", GenerateName:"calico-apiserver-67f4c54f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3dccdef9-807b-4475-ade1-4a0bc2c4fe76", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f4c54f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1", Pod:"calico-apiserver-67f4c54f9f-ttqlr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f89dd070ac", MAC:"66:2e:60:19:09:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:58.101784 containerd[1692]: 2025-12-12 17:26:58.097 [INFO][4481] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-ttqlr" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--ttqlr-eth0" Dec 12 17:26:58.113000 audit[4514]: NETFILTER_CFG table=filter:125 family=2 entries=50 op=nft_register_chain pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:58.115660 kernel: kauditd_printk_skb: 216 callbacks suppressed Dec 12 17:26:58.115695 kernel: audit: type=1325 audit(1765560418.113:664): table=filter:125 family=2 entries=50 op=nft_register_chain pid=4514 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:58.113000 audit[4514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffeb5212f0 a2=0 a3=ffffa0a2afa8 items=0 ppid=4229 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.121420 kernel: audit: type=1300 audit(1765560418.113:664): arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffeb5212f0 a2=0 a3=ffffa0a2afa8 items=0 ppid=4229 pid=4514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 12 17:26:58.113000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:58.123719 kernel: audit: type=1327 audit(1765560418.113:664): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:58.137260 containerd[1692]: time="2025-12-12T17:26:58.137218243Z" level=info msg="connecting to shim f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1" address="unix:///run/containerd/s/aadeece5a8e3052d6ddde51ae28422a75be1926d5379d048ea8860beabddb5dd" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:58.174210 systemd[1]: Started cri-containerd-f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1.scope - libcontainer container f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1. Dec 12 17:26:58.185000 audit: BPF prog-id=216 op=LOAD Dec 12 17:26:58.188151 kernel: audit: type=1334 audit(1765560418.185:665): prog-id=216 op=LOAD Dec 12 17:26:58.187000 audit: BPF prog-id=217 op=LOAD Dec 12 17:26:58.187000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.192344 kernel: audit: type=1334 audit(1765560418.187:666): prog-id=217 op=LOAD Dec 12 17:26:58.192446 kernel: audit: type=1300 audit(1765560418.187:666): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.192467 kernel: audit: type=1327 audit(1765560418.187:666): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.187000 audit: BPF prog-id=217 op=UNLOAD Dec 12 17:26:58.196715 kernel: audit: type=1334 audit(1765560418.187:667): prog-id=217 op=UNLOAD Dec 12 17:26:58.196775 kernel: audit: type=1300 audit(1765560418.187:667): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.187000 audit[4534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.200131 kernel: audit: type=1327 audit(1765560418.187:667): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.187000 audit: BPF prog-id=218 op=LOAD Dec 12 17:26:58.187000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.187000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.193000 audit: BPF prog-id=219 op=LOAD Dec 12 17:26:58.193000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.196000 audit: BPF prog-id=219 op=UNLOAD Dec 12 17:26:58.196000 audit[4534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.196000 audit: BPF prog-id=218 op=UNLOAD Dec 12 17:26:58.196000 audit[4534]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.196000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.198000 audit: BPF prog-id=220 op=LOAD Dec 12 17:26:58.198000 audit[4534]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4523 pid=4534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:58.198000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633343130303865316334326465323634323334616264656165376232 Dec 12 17:26:58.278405 containerd[1692]: time="2025-12-12T17:26:58.278260880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-ttqlr,Uid:3dccdef9-807b-4475-ade1-4a0bc2c4fe76,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f341008e1c42de264234abdeae7b21f69cb2b57071c793a582acecf179b017d1\"" Dec 12 17:26:58.281521 containerd[1692]: time="2025-12-12T17:26:58.281483736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:58.632772 containerd[1692]: time="2025-12-12T17:26:58.632622640Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:26:58.634037 containerd[1692]: time="2025-12-12T17:26:58.633972647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:58.634162 containerd[1692]: time="2025-12-12T17:26:58.634050208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:58.634349 kubelet[2892]: E1212 17:26:58.634241 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:58.634349 kubelet[2892]: E1212 17:26:58.634286 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:58.634767 kubelet[2892]: E1212 17:26:58.634350 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:58.634767 kubelet[2892]: E1212 17:26:58.634382 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:26:58.973109 containerd[1692]: time="2025-12-12T17:26:58.972972330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wpvbh,Uid:85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5,Namespace:kube-system,Attempt:0,}" Dec 12 
17:26:58.975895 containerd[1692]: time="2025-12-12T17:26:58.975860384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c6kdq,Uid:4c63d250-1806-4ef2-8959-7aad6322f80f,Namespace:calico-system,Attempt:0,}" Dec 12 17:26:58.977834 containerd[1692]: time="2025-12-12T17:26:58.977792794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-5ds7p,Uid:e2b17dac-df63-4a54-9c33-9908026d55bd,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:59.112698 kubelet[2892]: E1212 17:26:59.112619 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:26:59.129448 systemd-networkd[1604]: cali8c46125468f: Link UP Dec 12 17:26:59.129627 systemd-networkd[1604]: cali8c46125468f: Gained carrier Dec 12 17:26:59.134000 audit[4630]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:59.134000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdca69410 a2=0 a3=1 items=0 ppid=3021 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.134000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:59.141000 audit[4630]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:26:59.142814 containerd[1692]: 2025-12-12 17:26:59.046 [INFO][4560] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0 coredns-66bc5c9577- kube-system 85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5 839 0 2025-12-12 17:26:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 coredns-66bc5c9577-wpvbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8c46125468f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-" Dec 12 17:26:59.142814 containerd[1692]: 2025-12-12 17:26:59.046 [INFO][4560] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.142814 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4611] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" HandleID="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Workload="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4611] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" HandleID="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Workload="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3250), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-e-d121438740", "pod":"coredns-66bc5c9577-wpvbh", "timestamp":"2025-12-12 17:26:59.075588371 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4611] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4611] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4611] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.084 [INFO][4611] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.091 [INFO][4611] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.099 [INFO][4611] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.102 [INFO][4611] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.142956 containerd[1692]: 2025-12-12 17:26:59.106 [INFO][4611] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.106 [INFO][4611] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.108 [INFO][4611] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.114 [INFO][4611] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.121 [INFO][4611] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.195/26] block=192.168.99.192/26 handle="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" 
host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.121 [INFO][4611] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.195/26] handle="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.121 [INFO][4611] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:26:59.143165 containerd[1692]: 2025-12-12 17:26:59.121 [INFO][4611] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.195/26] IPv6=[] ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" HandleID="k8s-pod-network.2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Workload="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.143292 containerd[1692]: 2025-12-12 17:26:59.125 [INFO][4560] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"coredns-66bc5c9577-wpvbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c46125468f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:59.143292 containerd[1692]: 2025-12-12 17:26:59.126 [INFO][4560] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.195/32] ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" 
Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.143292 containerd[1692]: 2025-12-12 17:26:59.126 [INFO][4560] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c46125468f ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.143292 containerd[1692]: 2025-12-12 17:26:59.128 [INFO][4560] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.143292 containerd[1692]: 2025-12-12 17:26:59.129 [INFO][4560] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db", Pod:"coredns-66bc5c9577-wpvbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8c46125468f", MAC:"12:c4:1e:ce:df:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:59.143453 containerd[1692]: 2025-12-12 17:26:59.138 [INFO][4560] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" Namespace="kube-system" Pod="coredns-66bc5c9577-wpvbh" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--wpvbh-eth0" Dec 12 17:26:59.141000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdca69410 a2=0 a3=1 items=0 ppid=3021 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.141000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:26:59.156000 audit[4639]: NETFILTER_CFG table=filter:128 family=2 entries=46 op=nft_register_chain pid=4639 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:59.156000 audit[4639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=ffffff6d3010 a2=0 a3=ffff9d29ffa8 items=0 ppid=4229 pid=4639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.156000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:59.172154 containerd[1692]: time="2025-12-12T17:26:59.171549379Z" level=info msg="connecting to shim 2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db" address="unix:///run/containerd/s/d7d01dca3416470dde1762a639ba387f96c680b215df95fd0308f35858c283bb" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:59.193689 systemd[1]: Started cri-containerd-2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db.scope - libcontainer container 2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db. 
Dec 12 17:26:59.204000 audit: BPF prog-id=221 op=LOAD Dec 12 17:26:59.205000 audit: BPF prog-id=222 op=LOAD Dec 12 17:26:59.205000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.205000 audit: BPF prog-id=222 op=UNLOAD Dec 12 17:26:59.205000 audit[4658]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.205000 audit: BPF prog-id=223 op=LOAD Dec 12 17:26:59.205000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.206000 audit: BPF prog-id=224 op=LOAD Dec 12 17:26:59.206000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.206000 audit: BPF prog-id=224 op=UNLOAD Dec 12 17:26:59.206000 audit[4658]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.206000 audit: BPF prog-id=223 op=UNLOAD Dec 12 17:26:59.206000 audit[4658]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.206000 audit: BPF prog-id=225 op=LOAD Dec 12 17:26:59.206000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4648 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.206000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3266383139303531313037323831376432626432333462623139326537 Dec 12 17:26:59.237087 systemd-networkd[1604]: cali248ab93f43a: Link UP Dec 12 17:26:59.240073 systemd-networkd[1604]: cali248ab93f43a: Gained carrier Dec 12 17:26:59.249538 containerd[1692]: time="2025-12-12T17:26:59.249489335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-wpvbh,Uid:85ab89f0-e1e8-4d9a-9a38-ab9ec6dd1fa5,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db\"" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.042 [INFO][4578] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0 calico-apiserver-67f4c54f9f- calico-apiserver e2b17dac-df63-4a54-9c33-9908026d55bd 842 0 2025-12-12 17:26:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f4c54f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 calico-apiserver-67f4c54f9f-5ds7p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali248ab93f43a [] [] }} ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.042 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4604] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" HandleID="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" 
Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4604] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" HandleID="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058e5b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-e-d121438740", "pod":"calico-apiserver-67f4c54f9f-5ds7p", "timestamp":"2025-12-12 17:26:59.075581531 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4604] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.121 [INFO][4604] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.121 [INFO][4604] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.186 [INFO][4604] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.196 [INFO][4604] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.204 [INFO][4604] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.207 [INFO][4604] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.210 [INFO][4604] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.210 [INFO][4604] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.212 [INFO][4604] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66 Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.217 [INFO][4604] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.228 [INFO][4604] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.196/26] block=192.168.99.192/26 handle="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.228 [INFO][4604] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.99.196/26] handle="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.228 [INFO][4604] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:26:59.259059 containerd[1692]: 2025-12-12 17:26:59.228 [INFO][4604] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.196/26] IPv6=[] ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" HandleID="k8s-pod-network.fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.260284 containerd[1692]: 2025-12-12 17:26:59.231 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0", GenerateName:"calico-apiserver-67f4c54f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2b17dac-df63-4a54-9c33-9908026d55bd", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f4c54f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"calico-apiserver-67f4c54f9f-5ds7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali248ab93f43a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:59.260284 containerd[1692]: 2025-12-12 17:26:59.231 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.196/32] ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.260284 containerd[1692]: 2025-12-12 17:26:59.231 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali248ab93f43a ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.260284 containerd[1692]: 2025-12-12 17:26:59.240 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.260284 containerd[1692]: 2025-12-12 17:26:59.241 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0", GenerateName:"calico-apiserver-67f4c54f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2b17dac-df63-4a54-9c33-9908026d55bd", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f4c54f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66", Pod:"calico-apiserver-67f4c54f9f-5ds7p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali248ab93f43a", MAC:"92:d5:78:47:24:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:59.260284 containerd[1692]: 2025-12-12 17:26:59.256 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" Namespace="calico-apiserver" Pod="calico-apiserver-67f4c54f9f-5ds7p" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--67f4c54f9f--5ds7p-eth0" Dec 12 17:26:59.262910 containerd[1692]: time="2025-12-12T17:26:59.261046713Z" level=info msg="CreateContainer within sandbox \"2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:26:59.270000 audit[4693]: NETFILTER_CFG table=filter:129 family=2 entries=45 op=nft_register_chain pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:59.270000 audit[4693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24264 a0=3 a1=ffffc6108790 a2=0 a3=ffffaac18fa8 items=0 ppid=4229 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.270000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:59.273266 containerd[1692]: time="2025-12-12T17:26:59.273186135Z" level=info msg="Container a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:26:59.284258 containerd[1692]: time="2025-12-12T17:26:59.284185831Z" level=info msg="CreateContainer within sandbox \"2f8190511072817d2bd234bb192e78bbb329b230e205de8543d4d6c8e11df7db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5\"" Dec 12 17:26:59.284987 containerd[1692]: time="2025-12-12T17:26:59.284955835Z" level=info msg="StartContainer for \"a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5\"" Dec 12 17:26:59.285947 containerd[1692]: time="2025-12-12T17:26:59.285915760Z" level=info msg="connecting to shim a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5" address="unix:///run/containerd/s/d7d01dca3416470dde1762a639ba387f96c680b215df95fd0308f35858c283bb" protocol=ttrpc version=3 Dec 12 17:26:59.305489 containerd[1692]: time="2025-12-12T17:26:59.305107817Z" level=info msg="connecting to shim fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66" address="unix:///run/containerd/s/5f92008d275d080e553e21d61bfdef9957c880163a03d9d386917a8b37e093ca" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:59.316870 systemd[1]: Started cri-containerd-a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5.scope - libcontainer container a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5. Dec 12 17:26:59.336393 systemd[1]: Started cri-containerd-fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66.scope - libcontainer container fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66. 
Dec 12 17:26:59.338000 audit: BPF prog-id=226 op=LOAD Dec 12 17:26:59.341399 systemd-networkd[1604]: cali001e2d7da38: Link UP Dec 12 17:26:59.341771 systemd-networkd[1604]: cali001e2d7da38: Gained carrier Dec 12 17:26:59.341000 audit: BPF prog-id=227 op=LOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.341000 audit: BPF prog-id=227 op=UNLOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.341000 audit: BPF prog-id=228 op=LOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.341000 audit: BPF prog-id=229 op=LOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.341000 audit: BPF prog-id=229 op=UNLOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.341000 audit: BPF prog-id=228 op=UNLOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.341000 audit: BPF prog-id=230 op=LOAD Dec 12 17:26:59.341000 audit[4694]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4648 pid=4694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135656638646230333465353733323235346234353537343633303766 Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.048 [INFO][4565] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0 goldmane-7c778bb748- calico-system 4c63d250-1806-4ef2-8959-7aad6322f80f 838 0 2025-12-12 17:26:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 goldmane-7c778bb748-c6kdq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali001e2d7da38 [] [] }} ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.048 [INFO][4565] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.075 [INFO][4612] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" HandleID="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Workload="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.076 [INFO][4612] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" 
HandleID="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Workload="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-d121438740", "pod":"goldmane-7c778bb748-c6kdq", "timestamp":"2025-12-12 17:26:59.075644091 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.076 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.229 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.229 [INFO][4612] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.287 [INFO][4612] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.295 [INFO][4612] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.308 [INFO][4612] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.313 [INFO][4612] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.315 [INFO][4612] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.315 [INFO][4612] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.321 [INFO][4612] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.327 [INFO][4612] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.335 [INFO][4612] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.197/26] block=192.168.99.192/26 handle="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.335 [INFO][4612] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.197/26] handle="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" host="ci-4515-1-0-e-d121438740" Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.335 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:26:59.363843 containerd[1692]: 2025-12-12 17:26:59.335 [INFO][4612] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.197/26] IPv6=[] ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" HandleID="k8s-pod-network.61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Workload="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.365181 containerd[1692]: 2025-12-12 17:26:59.338 [INFO][4565] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4c63d250-1806-4ef2-8959-7aad6322f80f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"goldmane-7c778bb748-c6kdq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali001e2d7da38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:59.365181 containerd[1692]: 2025-12-12 17:26:59.339 [INFO][4565] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.197/32] ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.365181 containerd[1692]: 2025-12-12 17:26:59.339 [INFO][4565] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali001e2d7da38 ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.365181 containerd[1692]: 2025-12-12 17:26:59.342 [INFO][4565] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.365181 containerd[1692]: 2025-12-12 17:26:59.342 [INFO][4565] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" 
Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"4c63d250-1806-4ef2-8959-7aad6322f80f", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac", Pod:"goldmane-7c778bb748-c6kdq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.99.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali001e2d7da38", MAC:"e6:7b:f9:b1:0b:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:26:59.365181 containerd[1692]: 2025-12-12 17:26:59.359 [INFO][4565] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" Namespace="calico-system" Pod="goldmane-7c778bb748-c6kdq" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-goldmane--7c778bb748--c6kdq-eth0" Dec 12 17:26:59.368000 audit: BPF prog-id=231 op=LOAD Dec 12 17:26:59.368000 audit: BPF prog-id=232 op=LOAD Dec 12 17:26:59.368000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.368000 audit: BPF prog-id=232 op=UNLOAD Dec 12 17:26:59.368000 audit[4725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.368000 audit: BPF prog-id=233 op=LOAD Dec 12 17:26:59.368000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.368000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.369000 audit: BPF prog-id=234 op=LOAD Dec 12 17:26:59.369000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.369000 audit: BPF prog-id=234 op=UNLOAD Dec 12 17:26:59.369000 audit[4725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.369000 audit: BPF prog-id=233 op=UNLOAD Dec 12 17:26:59.369000 audit[4725]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.369000 audit: BPF prog-id=235 op=LOAD Dec 12 17:26:59.369000 audit[4725]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=4710 pid=4725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.369000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661333330656362303661306138653862636162323138376366396336 Dec 12 17:26:59.382000 audit[4771]: NETFILTER_CFG table=filter:130 family=2 entries=62 op=nft_register_chain pid=4771 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:26:59.382000 audit[4771]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31596 a0=3 
a1=ffffeeedf230 a2=0 a3=ffffba069fa8 items=0 ppid=4229 pid=4771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.382000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:26:59.385573 containerd[1692]: time="2025-12-12T17:26:59.385464706Z" level=info msg="StartContainer for \"a5ef8db034e5732254b455746307fc7a64a96b6e8ff0a6ada289c670349274b5\" returns successfully" Dec 12 17:26:59.390397 containerd[1692]: time="2025-12-12T17:26:59.390346250Z" level=info msg="connecting to shim 61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac" address="unix:///run/containerd/s/2f2fbdbf271fa8115774b9591ce5bef52ecd4746b1fbd56f99cd14f7eb1c6765" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:26:59.421539 systemd[1]: Started cri-containerd-61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac.scope - libcontainer container 61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac. Dec 12 17:26:59.430025 containerd[1692]: time="2025-12-12T17:26:59.429954372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f4c54f9f-5ds7p,Uid:e2b17dac-df63-4a54-9c33-9908026d55bd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"fa330ecb06a0a8e8bcab2187cf9c61e049fbe93ba95efaaa1d4d3ece85a26f66\"" Dec 12 17:26:59.432313 containerd[1692]: time="2025-12-12T17:26:59.432272783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:26:59.437000 audit: BPF prog-id=236 op=LOAD Dec 12 17:26:59.437000 audit: BPF prog-id=237 op=LOAD Dec 12 17:26:59.437000 audit[4796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.437000 audit: BPF prog-id=237 op=UNLOAD Dec 12 17:26:59.437000 audit[4796]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.437000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.438000 audit: BPF prog-id=238 op=LOAD Dec 12 17:26:59.438000 audit[4796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.438000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.438000 audit: BPF prog-id=239 op=LOAD Dec 12 17:26:59.438000 audit[4796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.438000 audit: BPF prog-id=239 op=UNLOAD Dec 12 17:26:59.438000 audit[4796]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.438000 audit: BPF prog-id=238 op=UNLOAD Dec 12 17:26:59.438000 audit[4796]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.438000 audit: BPF prog-id=240 op=LOAD Dec 12 17:26:59.438000 audit[4796]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4784 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:26:59.438000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631656565613039393933313563373238353131373237356236343639 Dec 12 17:26:59.476916 containerd[1692]: time="2025-12-12T17:26:59.476837530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-c6kdq,Uid:4c63d250-1806-4ef2-8959-7aad6322f80f,Namespace:calico-system,Attempt:0,} returns sandbox id \"61eeea0999315c7285117275b6469e3979b46751775c46bd30a90ed05e9514ac\"" Dec 12 17:26:59.699317 systemd-networkd[1604]: cali0f89dd070ac: Gained IPv6LL Dec 12 17:26:59.758758 containerd[1692]: time="2025-12-12T17:26:59.758701842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 
17:26:59.760052 containerd[1692]: time="2025-12-12T17:26:59.759992449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:26:59.760176 containerd[1692]: time="2025-12-12T17:26:59.760061449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:26:59.760326 kubelet[2892]: E1212 17:26:59.760289 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:59.760693 kubelet[2892]: E1212 17:26:59.760336 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:26:59.760693 kubelet[2892]: E1212 17:26:59.760513 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:26:59.760693 kubelet[2892]: E1212 17:26:59.760551 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:26:59.760927 containerd[1692]: time="2025-12-12T17:26:59.760800213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:26:59.975195 containerd[1692]: time="2025-12-12T17:26:59.975081581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849959c995-dlz6z,Uid:f42fc071-54b4-491f-b752-90f8070727e3,Namespace:calico-apiserver,Attempt:0,}" Dec 12 17:26:59.978555 containerd[1692]: time="2025-12-12T17:26:59.978487119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cw4v,Uid:131dba81-ae70-4090-a6eb-8ebf5f86d388,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:00.101492 containerd[1692]: time="2025-12-12T17:27:00.101442023Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:00.104382 containerd[1692]: time="2025-12-12T17:27:00.104279238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:00.105026 containerd[1692]: time="2025-12-12T17:27:00.104319438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes 
read=0" Dec 12 17:27:00.105466 kubelet[2892]: E1212 17:27:00.105260 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:00.105466 kubelet[2892]: E1212 17:27:00.105462 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:00.105870 kubelet[2892]: E1212 17:27:00.105820 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:00.105907 kubelet[2892]: E1212 17:27:00.105886 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:00.120717 systemd-networkd[1604]: cali3c77900ec24: Link UP Dec 12 17:27:00.122874 systemd-networkd[1604]: cali3c77900ec24: Gained carrier Dec 12 17:27:00.132811 kubelet[2892]: E1212 17:27:00.132169 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:27:00.135076 kubelet[2892]: E1212 17:27:00.134875 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:00.146191 kubelet[2892]: E1212 17:27:00.145411 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 
17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.037 [INFO][4843] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0 csi-node-driver- calico-system 131dba81-ae70-4090-a6eb-8ebf5f86d388 731 0 2025-12-12 17:26:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 csi-node-driver-5cw4v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3c77900ec24 [] [] }} ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.037 [INFO][4843] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4866] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" HandleID="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Workload="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4866] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" HandleID="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Workload="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000138780), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-d121438740", "pod":"csi-node-driver-5cw4v", "timestamp":"2025-12-12 17:27:00.063997393 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.075 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.081 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.086 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.088 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.091 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.091 [INFO][4866] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.093 [INFO][4866] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892 Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.099 [INFO][4866] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.108 [INFO][4866] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.198/26] block=192.168.99.192/26 handle="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.108 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.198/26] handle="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.108 [INFO][4866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:00.147485 containerd[1692]: 2025-12-12 17:27:00.108 [INFO][4866] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.198/26] IPv6=[] ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" HandleID="k8s-pod-network.8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Workload="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.147951 containerd[1692]: 2025-12-12 17:27:00.112 [INFO][4843] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"131dba81-ae70-4090-a6eb-8ebf5f86d388", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"csi-node-driver-5cw4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c77900ec24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:00.147951 containerd[1692]: 2025-12-12 17:27:00.112 [INFO][4843] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.198/32] ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.147951 containerd[1692]: 2025-12-12 17:27:00.112 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c77900ec24 ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.147951 containerd[1692]: 2025-12-12 17:27:00.125 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.147951 containerd[1692]: 2025-12-12 17:27:00.126 [INFO][4843] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"131dba81-ae70-4090-a6eb-8ebf5f86d388", ResourceVersion:"731", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892", Pod:"csi-node-driver-5cw4v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3c77900ec24", MAC:"4a:1f:fe:f7:4e:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:00.147951 containerd[1692]: 2025-12-12 17:27:00.140 [INFO][4843] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" Namespace="calico-system" Pod="csi-node-driver-5cw4v" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-csi--node--driver--5cw4v-eth0" Dec 12 17:27:00.164137 kubelet[2892]: I1212 17:27:00.163878 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wpvbh" podStartSLOduration=39.163861741 podStartE2EDuration="39.163861741s" podCreationTimestamp="2025-12-12 17:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:00.163622179 +0000 UTC m=+45.287675537" watchObservedRunningTime="2025-12-12 17:27:00.163861741 +0000 UTC m=+45.287915019" Dec 12 17:27:00.178000 audit[4884]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:00.178000 audit[4884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff0a49750 a2=0 a3=1 items=0 ppid=3021 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:00.184000 audit[4884]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4884 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 12 17:27:00.191157 containerd[1692]: time="2025-12-12T17:27:00.191060479Z" level=info msg="connecting to shim 8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892" address="unix:///run/containerd/s/5b0ebaacc9cca094aa2920c7aa7bb5617a81da97ae4e96761f39118aabdbe884" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:00.184000 audit[4884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff0a49750 a2=0 a3=1 items=0 ppid=3021 pid=4884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.184000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:00.196000 audit[4887]: NETFILTER_CFG table=filter:133 family=2 entries=54 op=nft_register_chain pid=4887 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:00.196000 audit[4887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25976 a0=3 a1=ffffe0a506a0 a2=0 a3=ffff82b89fa8 items=0 ppid=4229 pid=4887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.196000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:00.224611 systemd[1]: Started cri-containerd-8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892.scope - libcontainer container 8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892. 
Dec 12 17:27:00.256000 audit: BPF prog-id=241 op=LOAD Dec 12 17:27:00.257000 audit: BPF prog-id=242 op=LOAD Dec 12 17:27:00.257000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.257000 audit: BPF prog-id=242 op=UNLOAD Dec 12 17:27:00.257000 audit[4905]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.257000 audit: BPF prog-id=243 op=LOAD Dec 12 17:27:00.257000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.257000 audit: BPF prog-id=244 op=LOAD Dec 12 17:27:00.257000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.257000 audit: BPF prog-id=244 op=UNLOAD Dec 12 17:27:00.257000 audit[4905]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.257000 audit: BPF prog-id=243 op=UNLOAD Dec 12 17:27:00.257000 audit[4905]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.257000 audit: BPF prog-id=245 op=LOAD Dec 12 17:27:00.257000 audit[4905]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=4895 pid=4905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.257000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3865666466313838626339363536356238613563623862373163373032 Dec 12 17:27:00.266404 systemd-networkd[1604]: cali14483585a89: Link UP Dec 12 17:27:00.266916 systemd-networkd[1604]: cali14483585a89: Gained carrier Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.029 [INFO][4836] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0 calico-apiserver-849959c995- calico-apiserver f42fc071-54b4-491f-b752-90f8070727e3 840 0 2025-12-12 17:26:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:849959c995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 calico-apiserver-849959c995-dlz6z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14483585a89 [] [] }} ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.030 [INFO][4836] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4860] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" HandleID="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4860] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" HandleID="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" 
Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd0d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-e-d121438740", "pod":"calico-apiserver-849959c995-dlz6z", "timestamp":"2025-12-12 17:27:00.064578836 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.064 [INFO][4860] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.109 [INFO][4860] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.109 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.177 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.194 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.204 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.215 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.220 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.220 [INFO][4860] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.224 [INFO][4860] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704 Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.240 [INFO][4860] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.249 [INFO][4860] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.199/26] block=192.168.99.192/26 handle="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.251 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.199/26] handle="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.251 [INFO][4860] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Dec 12 17:27:00.297146 containerd[1692]: 2025-12-12 17:27:00.252 [INFO][4860] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.199/26] IPv6=[] ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" HandleID="k8s-pod-network.67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Workload="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.297661 containerd[1692]: 2025-12-12 17:27:00.256 [INFO][4836] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0", GenerateName:"calico-apiserver-849959c995-", Namespace:"calico-apiserver", SelfLink:"", UID:"f42fc071-54b4-491f-b752-90f8070727e3", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849959c995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"calico-apiserver-849959c995-dlz6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14483585a89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:00.297661 containerd[1692]: 2025-12-12 17:27:00.257 [INFO][4836] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.199/32] ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.297661 containerd[1692]: 2025-12-12 17:27:00.257 [INFO][4836] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14483585a89 ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.297661 containerd[1692]: 2025-12-12 17:27:00.266 [INFO][4836] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.297661 containerd[1692]: 2025-12-12 
17:27:00.267 [INFO][4836] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0", GenerateName:"calico-apiserver-849959c995-", Namespace:"calico-apiserver", SelfLink:"", UID:"f42fc071-54b4-491f-b752-90f8070727e3", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"849959c995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704", Pod:"calico-apiserver-849959c995-dlz6z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.99.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14483585a89", MAC:"ae:92:24:85:97:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:00.297661 containerd[1692]: 2025-12-12 17:27:00.292 [INFO][4836] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" Namespace="calico-apiserver" Pod="calico-apiserver-849959c995-dlz6z" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--apiserver--849959c995--dlz6z-eth0" Dec 12 17:27:00.323000 audit[4941]: NETFILTER_CFG table=filter:134 family=2 entries=49 op=nft_register_chain pid=4941 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:00.323000 audit[4941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25420 a0=3 a1=ffffe5f638f0 a2=0 a3=ffffb42bffa8 items=0 ppid=4229 pid=4941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.323000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:00.326583 containerd[1692]: time="2025-12-12T17:27:00.326550767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5cw4v,Uid:131dba81-ae70-4090-a6eb-8ebf5f86d388,Namespace:calico-system,Attempt:0,} returns sandbox id \"8efdf188bc96565b8a5cb8b71c702c8e06a57693e77d15f49a10d3f6eef01892\"" Dec 12 17:27:00.330368 containerd[1692]: time="2025-12-12T17:27:00.330291186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 
12 17:27:00.337508 containerd[1692]: time="2025-12-12T17:27:00.337463423Z" level=info msg="connecting to shim 67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704" address="unix:///run/containerd/s/9b49e92992a12a17be63fcefcd19f25c004d9d06182aeb5dfe13609344ce7c72" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:00.358367 systemd[1]: Started cri-containerd-67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704.scope - libcontainer container 67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704. Dec 12 17:27:00.374000 audit: BPF prog-id=246 op=LOAD Dec 12 17:27:00.374000 audit: BPF prog-id=247 op=LOAD Dec 12 17:27:00.374000 audit[4961]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.375000 audit: BPF prog-id=247 op=UNLOAD Dec 12 17:27:00.375000 audit[4961]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.375000 audit: BPF prog-id=248 op=LOAD Dec 12 17:27:00.375000 audit[4961]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.375000 audit: BPF prog-id=249 op=LOAD Dec 12 17:27:00.375000 audit[4961]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.375000 audit: BPF prog-id=249 op=UNLOAD Dec 12 17:27:00.375000 audit[4961]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.375000 audit: BPF prog-id=248 op=UNLOAD Dec 12 17:27:00.375000 audit[4961]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.375000 audit: BPF prog-id=250 op=LOAD Dec 12 17:27:00.375000 audit[4961]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4950 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:00.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3637656334393531343238313136303564323462623531306666316339 Dec 12 17:27:00.410914 containerd[1692]: time="2025-12-12T17:27:00.410868356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-849959c995-dlz6z,Uid:f42fc071-54b4-491f-b752-90f8070727e3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"67ec495142811605d24bb510ff1c9f2f18cf0d9dd71232ff3098d74b8932c704\"" Dec 12 17:27:00.468241 systemd-networkd[1604]: cali001e2d7da38: Gained IPv6LL Dec 12 17:27:00.682649 containerd[1692]: time="2025-12-12T17:27:00.682425856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:00.684402 containerd[1692]: time="2025-12-12T17:27:00.684352785Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:00.684452 containerd[1692]: time="2025-12-12T17:27:00.684413746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:00.684700 kubelet[2892]: E1212 17:27:00.684647 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:00.684700 kubelet[2892]: E1212 17:27:00.684698 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:00.685096 kubelet[2892]: E1212 17:27:00.684898 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:00.685175 containerd[1692]: time="2025-12-12T17:27:00.685009149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:00.851238 systemd-networkd[1604]: cali248ab93f43a: Gained IPv6LL Dec 12 17:27:00.973738 containerd[1692]: time="2025-12-12T17:27:00.973522775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jqzql,Uid:ab177ad4-2ea1-44c1-83d1-270008bbbaa1,Namespace:kube-system,Attempt:0,}" Dec 12 17:27:01.020231 containerd[1692]: time="2025-12-12T17:27:01.020099611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:01.022699 containerd[1692]: time="2025-12-12T17:27:01.022612944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:01.022796 containerd[1692]: time="2025-12-12T17:27:01.022708825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:01.022987 kubelet[2892]: E1212 17:27:01.022894 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:01.022987 kubelet[2892]: E1212 17:27:01.022954 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:01.023773 kubelet[2892]: E1212 17:27:01.023151 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:01.023773 kubelet[2892]: E1212 17:27:01.023192 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:01.023949 containerd[1692]: time="2025-12-12T17:27:01.023522989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:01.043308 
systemd-networkd[1604]: cali8c46125468f: Gained IPv6LL Dec 12 17:27:01.143710 kubelet[2892]: E1212 17:27:01.143665 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:01.146397 kubelet[2892]: E1212 17:27:01.146341 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:01.147620 kubelet[2892]: E1212 17:27:01.147586 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:27:01.158264 systemd-networkd[1604]: cali152a5798532: Link UP Dec 12 17:27:01.159807 systemd-networkd[1604]: cali152a5798532: Gained carrier Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.046 [INFO][4992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0 coredns-66bc5c9577- kube-system ab177ad4-2ea1-44c1-83d1-270008bbbaa1 836 0 2025-12-12 17:26:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 coredns-66bc5c9577-jqzql eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali152a5798532 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.046 [INFO][4992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.088 [INFO][5001] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" 
HandleID="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Workload="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.088 [INFO][5001] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" HandleID="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Workload="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000515e70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-e-d121438740", "pod":"coredns-66bc5c9577-jqzql", "timestamp":"2025-12-12 17:27:01.088503919 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.088 [INFO][5001] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.088 [INFO][5001] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.088 [INFO][5001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.102 [INFO][5001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.111 [INFO][5001] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.118 [INFO][5001] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.123 [INFO][5001] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.127 [INFO][5001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.127 [INFO][5001] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.129 [INFO][5001] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57 Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.136 [INFO][5001] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.148 [INFO][5001] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.200/26] block=192.168.99.192/26 handle="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 
17:27:01.148 [INFO][5001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.200/26] handle="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.148 [INFO][5001] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:27:01.181267 containerd[1692]: 2025-12-12 17:27:01.148 [INFO][5001] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.200/26] IPv6=[] ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" HandleID="k8s-pod-network.7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Workload="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.181812 containerd[1692]: 2025-12-12 17:27:01.152 [INFO][4992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ab177ad4-2ea1-44c1-83d1-270008bbbaa1", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"coredns-66bc5c9577-jqzql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali152a5798532", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.181812 containerd[1692]: 2025-12-12 17:27:01.152 [INFO][4992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.200/32] ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" 
WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.181812 containerd[1692]: 2025-12-12 17:27:01.152 [INFO][4992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali152a5798532 ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.181812 containerd[1692]: 2025-12-12 17:27:01.158 [INFO][4992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.181812 containerd[1692]: 2025-12-12 17:27:01.158 [INFO][4992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"ab177ad4-2ea1-44c1-83d1-270008bbbaa1", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57", Pod:"coredns-66bc5c9577-jqzql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali152a5798532", MAC:"0e:eb:a7:36:92:bf", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:01.181964 containerd[1692]: 2025-12-12 17:27:01.175 [INFO][4992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" Namespace="kube-system" Pod="coredns-66bc5c9577-jqzql" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-coredns--66bc5c9577--jqzql-eth0" Dec 12 17:27:01.221994 containerd[1692]: time="2025-12-12T17:27:01.221946597Z" level=info msg="connecting to shim 7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57" address="unix:///run/containerd/s/7b0cf0a43c71e465bc5a96843bc29d3d9a1c9dc173a91fbb09dcc52e93133eb0" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:01.226000 audit[5026]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.226000 audit[5026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd6bae5a0 a2=0 a3=1 items=0 ppid=3021 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.226000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.232000 audit[5026]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=5026 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:01.232000 audit[5026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd6bae5a0 a2=0 a3=1 items=0 ppid=3021 pid=5026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.232000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:01.253000 audit[5054]: NETFILTER_CFG table=filter:137 family=2 entries=54 op=nft_register_chain pid=5054 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:01.253000 audit[5054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25540 a0=3 a1=ffffc41025a0 a2=0 a3=ffff9ee8dfa8 items=0 ppid=4229 pid=5054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.253000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:01.262396 systemd[1]: Started cri-containerd-7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57.scope - libcontainer container 7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57. 
Dec 12 17:27:01.281000 audit: BPF prog-id=251 op=LOAD Dec 12 17:27:01.282000 audit: BPF prog-id=252 op=LOAD Dec 12 17:27:01.282000 audit[5042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.282000 audit: BPF prog-id=252 op=UNLOAD Dec 12 17:27:01.282000 audit[5042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.282000 audit: BPF prog-id=253 op=LOAD Dec 12 17:27:01.282000 audit[5042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.282000 audit: BPF prog-id=254 op=LOAD Dec 12 17:27:01.282000 audit[5042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.282000 audit: BPF prog-id=254 op=UNLOAD Dec 12 17:27:01.282000 audit[5042]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.282000 audit: BPF prog-id=253 op=UNLOAD Dec 12 17:27:01.282000 audit[5042]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.282000 audit: BPF prog-id=255 op=LOAD Dec 12 17:27:01.282000 audit[5042]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=5024 pid=5042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.282000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735393262356237363931383564663964396634353833333161373234 Dec 12 17:27:01.312249 containerd[1692]: time="2025-12-12T17:27:01.312028655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jqzql,Uid:ab177ad4-2ea1-44c1-83d1-270008bbbaa1,Namespace:kube-system,Attempt:0,} returns sandbox id \"7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57\"" Dec 12 17:27:01.318100 containerd[1692]: time="2025-12-12T17:27:01.318058405Z" level=info msg="CreateContainer within sandbox \"7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 12 17:27:01.337327 containerd[1692]: time="2025-12-12T17:27:01.337287103Z" level=info msg="Container bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:27:01.344482 containerd[1692]: time="2025-12-12T17:27:01.344432339Z" level=info msg="CreateContainer within sandbox \"7592b5b769185df9d9f458331a724f50938e40fdc02222a789c426dc3bbd8f57\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068\"" Dec 12 17:27:01.345166 containerd[1692]: time="2025-12-12T17:27:01.345106223Z" level=info msg="StartContainer for \"bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068\"" Dec 12 17:27:01.345985 containerd[1692]: time="2025-12-12T17:27:01.345956707Z" level=info msg="connecting to shim bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068" address="unix:///run/containerd/s/7b0cf0a43c71e465bc5a96843bc29d3d9a1c9dc173a91fbb09dcc52e93133eb0" protocol=ttrpc version=3 Dec 12 17:27:01.362926 containerd[1692]: time="2025-12-12T17:27:01.362562711Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:01.364466 containerd[1692]: time="2025-12-12T17:27:01.364272400Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:01.364927 kubelet[2892]: E1212 17:27:01.364792 2892 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:01.364927 kubelet[2892]: E1212 17:27:01.364837 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:01.365223 kubelet[2892]: E1212 17:27:01.364970 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:01.365223 kubelet[2892]: E1212 17:27:01.365026 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:27:01.365315 containerd[1692]: time="2025-12-12T17:27:01.364393401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:01.368420 systemd[1]: Started cri-containerd-bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068.scope - libcontainer container bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068. 
Dec 12 17:27:01.383000 audit: BPF prog-id=256 op=LOAD Dec 12 17:27:01.383000 audit: BPF prog-id=257 op=LOAD Dec 12 17:27:01.383000 audit[5068]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.383000 audit: BPF prog-id=257 op=UNLOAD Dec 12 17:27:01.383000 audit[5068]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.383000 audit: BPF prog-id=258 op=LOAD Dec 12 17:27:01.383000 audit[5068]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.383000 audit: BPF prog-id=259 op=LOAD Dec 12 17:27:01.383000 audit[5068]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.383000 audit: BPF prog-id=259 op=UNLOAD Dec 12 17:27:01.383000 audit[5068]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.383000 audit: BPF prog-id=258 op=UNLOAD Dec 12 17:27:01.383000 audit[5068]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.383000 audit: BPF prog-id=260 op=LOAD Dec 12 17:27:01.383000 audit[5068]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=5024 pid=5068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:01.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6265653730616431623863623865616638363033653762643536363161 Dec 12 17:27:01.405154 containerd[1692]: time="2025-12-12T17:27:01.405087567Z" level=info msg="StartContainer for \"bee70ad1b8cb8eaf8603e7bd5661a7a1060e430b0ed6b83c038aaa9473f4a068\" returns successfully" Dec 12 17:27:01.811240 systemd-networkd[1604]: cali14483585a89: Gained IPv6LL Dec 12 17:27:02.131388 systemd-networkd[1604]: cali3c77900ec24: Gained IPv6LL Dec 12 17:27:02.151297 kubelet[2892]: E1212 17:27:02.151232 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:02.151634 kubelet[2892]: E1212 17:27:02.151575 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:27:02.166561 kubelet[2892]: I1212 17:27:02.165620 2892 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jqzql" podStartSLOduration=41.165604832 podStartE2EDuration="41.165604832s" podCreationTimestamp="2025-12-12 17:26:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-12 17:27:02.16531231 +0000 UTC m=+47.289365668" watchObservedRunningTime="2025-12-12 17:27:02.165604832 +0000 UTC m=+47.289658110" Dec 12 17:27:02.254000 audit[5103]: NETFILTER_CFG table=filter:138 family=2 entries=14 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.254000 audit[5103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff4c533a0 a2=0 a3=1 items=0 ppid=3021 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.265000 audit[5103]: NETFILTER_CFG table=nat:139 family=2 entries=44 op=nft_register_rule pid=5103 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:02.265000 audit[5103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff4c533a0 a2=0 a3=1 items=0 ppid=3021 pid=5103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:02.265000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:02.707306 systemd-networkd[1604]: cali152a5798532: Gained IPv6LL Dec 12 17:27:02.972973 containerd[1692]: time="2025-12-12T17:27:02.972145010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db4fd4bfb-cz8pj,Uid:9edb9d6f-c2b4-4544-aec2-09be399b44d1,Namespace:calico-system,Attempt:0,}" Dec 12 17:27:03.090322 systemd-networkd[1604]: calif5b4fa5563c: Link UP Dec 12 17:27:03.091661 systemd-networkd[1604]: calif5b4fa5563c: Gained carrier Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.018 [INFO][5110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0 calico-kube-controllers-7db4fd4bfb- calico-system 9edb9d6f-c2b4-4544-aec2-09be399b44d1 835 0 2025-12-12 17:26:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7db4fd4bfb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-e-d121438740 calico-kube-controllers-7db4fd4bfb-cz8pj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif5b4fa5563c [] [] }} ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.018 [INFO][5110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.045 [INFO][5125] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" HandleID="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Workload="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.045 [INFO][5125] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" HandleID="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Workload="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-e-d121438740", "pod":"calico-kube-controllers-7db4fd4bfb-cz8pj", "timestamp":"2025-12-12 17:27:03.045190221 +0000 UTC"}, Hostname:"ci-4515-1-0-e-d121438740", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.045 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.045 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.045 [INFO][5125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-e-d121438740' Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.054 [INFO][5125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.060 [INFO][5125] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.065 [INFO][5125] ipam/ipam.go 511: Trying affinity for 192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.067 [INFO][5125] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.070 [INFO][5125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.192/26 host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.070 [INFO][5125] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.99.192/26 handle="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.071 [INFO][5125] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01 Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.077 [INFO][5125] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.99.192/26 handle="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.084 [INFO][5125] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.99.201/26] 
block=192.168.99.192/26 handle="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.084 [INFO][5125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.201/26] handle="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" host="ci-4515-1-0-e-d121438740" Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.084 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Dec 12 17:27:03.108354 containerd[1692]: 2025-12-12 17:27:03.085 [INFO][5125] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.99.201/26] IPv6=[] ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" HandleID="k8s-pod-network.7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Workload="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.108875 containerd[1692]: 2025-12-12 17:27:03.087 [INFO][5110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0", GenerateName:"calico-kube-controllers-7db4fd4bfb-", Namespace:"calico-system", SelfLink:"", UID:"9edb9d6f-c2b4-4544-aec2-09be399b44d1", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db4fd4bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"", Pod:"calico-kube-controllers-7db4fd4bfb-cz8pj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b4fa5563c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:03.108875 containerd[1692]: 2025-12-12 17:27:03.088 [INFO][5110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.201/32] ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.108875 containerd[1692]: 2025-12-12 17:27:03.088 [INFO][5110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5b4fa5563c 
ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.108875 containerd[1692]: 2025-12-12 17:27:03.091 [INFO][5110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.108875 containerd[1692]: 2025-12-12 17:27:03.094 [INFO][5110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0", GenerateName:"calico-kube-controllers-7db4fd4bfb-", Namespace:"calico-system", SelfLink:"", UID:"9edb9d6f-c2b4-4544-aec2-09be399b44d1", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.December, 12, 17, 26, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db4fd4bfb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-e-d121438740", ContainerID:"7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01", Pod:"calico-kube-controllers-7db4fd4bfb-cz8pj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b4fa5563c", MAC:"1a:07:92:e4:dd:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Dec 12 17:27:03.108875 containerd[1692]: 2025-12-12 17:27:03.103 [INFO][5110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" Namespace="calico-system" Pod="calico-kube-controllers-7db4fd4bfb-cz8pj" WorkloadEndpoint="ci--4515--1--0--e--d121438740-k8s-calico--kube--controllers--7db4fd4bfb--cz8pj-eth0" Dec 12 17:27:03.116000 audit[5142]: NETFILTER_CFG table=filter:140 family=2 entries=52 op=nft_register_chain pid=5142 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 12 17:27:03.120779 kernel: kauditd_printk_skb: 233 callbacks suppressed Dec 12 17:27:03.121230 kernel: audit: type=1325 audit(1765560423.116:751): table=filter:140 family=2 entries=52 op=nft_register_chain pid=5142 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 
12 17:27:03.116000 audit[5142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24280 a0=3 a1=fffffb0e1eb0 a2=0 a3=ffff81912fa8 items=0 ppid=4229 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.127682 kernel: audit: type=1300 audit(1765560423.116:751): arch=c00000b7 syscall=211 success=yes exit=24280 a0=3 a1=fffffb0e1eb0 a2=0 a3=ffff81912fa8 items=0 ppid=4229 pid=5142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.116000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:03.133920 kernel: audit: type=1327 audit(1765560423.116:751): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 12 17:27:03.136336 containerd[1692]: time="2025-12-12T17:27:03.136285204Z" level=info msg="connecting to shim 7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01" address="unix:///run/containerd/s/42cafe6632d2d431ea172c70a55dd45098517f8fe58ea6ff62ef8d650aaab352" namespace=k8s.io protocol=ttrpc version=3 Dec 12 17:27:03.159379 systemd[1]: Started cri-containerd-7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01.scope - libcontainer container 7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01. Dec 12 17:27:03.184000 audit: BPF prog-id=261 op=LOAD Dec 12 17:27:03.184000 audit: BPF prog-id=262 op=LOAD Dec 12 17:27:03.184000 audit[5162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.190344 kernel: audit: type=1334 audit(1765560423.184:752): prog-id=261 op=LOAD Dec 12 17:27:03.190447 kernel: audit: type=1334 audit(1765560423.184:753): prog-id=262 op=LOAD Dec 12 17:27:03.190471 kernel: audit: type=1300 audit(1765560423.184:753): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.184000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.193840 kernel: audit: type=1327 audit(1765560423.184:753): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.185000 audit: BPF prog-id=262 op=UNLOAD Dec 12 17:27:03.195139 kernel: audit: type=1334 audit(1765560423.185:754): prog-id=262 op=UNLOAD Dec 12 17:27:03.185000 audit[5162]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.199080 kernel: audit: type=1300 audit(1765560423.185:754): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.185000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.203971 kernel: audit: type=1327 audit(1765560423.185:754): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.186000 audit: BPF prog-id=263 op=LOAD Dec 12 17:27:03.186000 audit[5162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.189000 audit: BPF prog-id=264 op=LOAD Dec 12 17:27:03.189000 audit[5162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.189000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.193000 audit: BPF prog-id=264 op=UNLOAD Dec 12 17:27:03.193000 audit[5162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.193000 audit: BPF prog-id=263 op=UNLOAD Dec 12 17:27:03.193000 audit[5162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.193000 audit: BPF prog-id=265 op=LOAD Dec 12 17:27:03.193000 audit[5162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=5151 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3735363661656161653133633734643966353037396334316436653339 Dec 12 17:27:03.229855 containerd[1692]: time="2025-12-12T17:27:03.229738039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db4fd4bfb-cz8pj,Uid:9edb9d6f-c2b4-4544-aec2-09be399b44d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7566aeaae13c74d9f5079c41d6e39f3cecc1e5ee1b449b6ba5d10d7995fecc01\"" Dec 12 17:27:03.234764 containerd[1692]: time="2025-12-12T17:27:03.234722224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:03.294000 audit[5189]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.294000 audit[5189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeabe3990 a2=0 a3=1 items=0 ppid=3021 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.305000 audit[5189]: NETFILTER_CFG table=nat:142 family=2 entries=56 op=nft_register_chain pid=5189 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:27:03.305000 audit[5189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffeabe3990 a2=0 a3=1 items=0 ppid=3021 pid=5189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:27:03.305000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:27:03.562474 containerd[1692]: time="2025-12-12T17:27:03.562181968Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:03.563521 containerd[1692]: time="2025-12-12T17:27:03.563484134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:03.563679 containerd[1692]: time="2025-12-12T17:27:03.563544415Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:03.563906 kubelet[2892]: E1212 17:27:03.563846 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:03.563906 kubelet[2892]: E1212 17:27:03.563898 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:03.564372 kubelet[2892]: E1212 17:27:03.563977 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:03.564372 kubelet[2892]: E1212 17:27:03.564009 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:04.164134 kubelet[2892]: E1212 17:27:04.163011 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:04.372268 systemd-networkd[1604]: calif5b4fa5563c: Gained IPv6LL Dec 12 17:27:05.164590 kubelet[2892]: E1212 17:27:05.164530 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:07.970506 containerd[1692]: time="2025-12-12T17:27:07.970110485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:08.299480 containerd[1692]: time="2025-12-12T17:27:08.299183717Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:08.300837 containerd[1692]: 
time="2025-12-12T17:27:08.300782765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:08.301041 containerd[1692]: time="2025-12-12T17:27:08.300833805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:08.301284 kubelet[2892]: E1212 17:27:08.301245 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:08.301584 kubelet[2892]: E1212 17:27:08.301292 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:08.301584 kubelet[2892]: E1212 17:27:08.301356 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:08.303846 containerd[1692]: time="2025-12-12T17:27:08.303814581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:08.638569 containerd[1692]: time="2025-12-12T17:27:08.638415481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:08.639870 containerd[1692]: time="2025-12-12T17:27:08.639807248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:08.639943 containerd[1692]: time="2025-12-12T17:27:08.639892728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:08.640104 kubelet[2892]: E1212 17:27:08.640050 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:08.640104 kubelet[2892]: E1212 17:27:08.640101 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:08.640260 kubelet[2892]: E1212 17:27:08.640186 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:08.640260 kubelet[2892]: E1212 17:27:08.640221 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:27:10.969862 containerd[1692]: time="2025-12-12T17:27:10.969690446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:11.316555 containerd[1692]: time="2025-12-12T17:27:11.316506368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:11.318862 containerd[1692]: time="2025-12-12T17:27:11.318775020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:11.318993 containerd[1692]: time="2025-12-12T17:27:11.318819780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:11.319177 kubelet[2892]: E1212 17:27:11.319140 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:11.321411 kubelet[2892]: E1212 17:27:11.321219 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:11.321411 kubelet[2892]: E1212 17:27:11.321362 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:11.321579 kubelet[2892]: E1212 17:27:11.321394 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:27:11.970055 containerd[1692]: 
time="2025-12-12T17:27:11.969753407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:12.296550 containerd[1692]: time="2025-12-12T17:27:12.296402067Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:12.297896 containerd[1692]: time="2025-12-12T17:27:12.297859115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:12.298041 containerd[1692]: time="2025-12-12T17:27:12.297938315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:12.298130 kubelet[2892]: E1212 17:27:12.298066 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:12.298190 kubelet[2892]: E1212 17:27:12.298141 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:12.298236 kubelet[2892]: E1212 17:27:12.298218 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:12.298270 kubelet[2892]: E1212 17:27:12.298252 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:13.970203 containerd[1692]: time="2025-12-12T17:27:13.970164732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:14.306562 containerd[1692]: time="2025-12-12T17:27:14.306479921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:14.307837 containerd[1692]: time="2025-12-12T17:27:14.307802607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:14.307908 containerd[1692]: time="2025-12-12T17:27:14.307838247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:14.308065 kubelet[2892]: E1212 17:27:14.308013 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:14.308347 kubelet[2892]: E1212 17:27:14.308074 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:14.308347 kubelet[2892]: E1212 17:27:14.308157 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:14.308347 kubelet[2892]: E1212 17:27:14.308192 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:15.969932 containerd[1692]: time="2025-12-12T17:27:15.969880012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:16.301737 containerd[1692]: time="2025-12-12T17:27:16.301689738Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:16.303551 containerd[1692]: time="2025-12-12T17:27:16.303500347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:16.303634 containerd[1692]: time="2025-12-12T17:27:16.303543308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:16.303738 kubelet[2892]: E1212 17:27:16.303705 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:16.304146 kubelet[2892]: E1212 17:27:16.303748 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:16.304360 kubelet[2892]: E1212 17:27:16.304255 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:16.304519 kubelet[2892]: E1212 17:27:16.304296 2892 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:27:16.304578 containerd[1692]: time="2025-12-12T17:27:16.304542153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:16.819946 containerd[1692]: time="2025-12-12T17:27:16.819770251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:16.821313 containerd[1692]: time="2025-12-12T17:27:16.821277618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:16.821412 containerd[1692]: time="2025-12-12T17:27:16.821358219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:16.821581 kubelet[2892]: E1212 17:27:16.821534 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:16.821635 kubelet[2892]: E1212 17:27:16.821591 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:16.821709 kubelet[2892]: E1212 17:27:16.821690 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:16.822676 containerd[1692]: time="2025-12-12T17:27:16.822655305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:17.142857 containerd[1692]: time="2025-12-12T17:27:17.142741612Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:17.144396 containerd[1692]: time="2025-12-12T17:27:17.144330500Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:17.144612 kubelet[2892]: E1212 17:27:17.144555 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:17.144658 kubelet[2892]: E1212 17:27:17.144619 2892 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:17.144766 kubelet[2892]: E1212 17:27:17.144746 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:17.144816 kubelet[2892]: E1212 17:27:17.144785 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:27:17.145051 containerd[1692]: time="2025-12-12T17:27:17.144417060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:18.970540 containerd[1692]: time="2025-12-12T17:27:18.970301298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:18.971807 kubelet[2892]: E1212 17:27:18.970959 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:27:19.299524 containerd[1692]: time="2025-12-12T17:27:19.299473610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:19.300815 containerd[1692]: time="2025-12-12T17:27:19.300768857Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:19.301047 containerd[1692]: time="2025-12-12T17:27:19.300849537Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:19.301128 kubelet[2892]: E1212 17:27:19.301057 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:19.301128 kubelet[2892]: E1212 17:27:19.301108 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:19.301221 kubelet[2892]: E1212 17:27:19.301203 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:19.301253 kubelet[2892]: E1212 17:27:19.301238 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:25.969156 kubelet[2892]: E1212 17:27:25.969095 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:26.970520 kubelet[2892]: E1212 17:27:26.970375 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:26.970520 kubelet[2892]: E1212 17:27:26.970467 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:27:28.970028 kubelet[2892]: E1212 17:27:28.969672 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:27:29.970025 kubelet[2892]: E1212 17:27:29.969935 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:27:31.969674 kubelet[2892]: E1212 17:27:31.969463 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:33.969220 containerd[1692]: time="2025-12-12T17:27:33.969172147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:27:34.310887 containerd[1692]: time="2025-12-12T17:27:34.310791763Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:34.312361 containerd[1692]: time="2025-12-12T17:27:34.312326531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:27:34.312487 containerd[1692]: time="2025-12-12T17:27:34.312357731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:34.312567 kubelet[2892]: E1212 17:27:34.312528 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:34.312989 kubelet[2892]: E1212 17:27:34.312576 2892 kuberuntime_image.go:43] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:27:34.312989 kubelet[2892]: E1212 17:27:34.312650 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:34.313611 containerd[1692]: time="2025-12-12T17:27:34.313586817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:27:34.655750 containerd[1692]: time="2025-12-12T17:27:34.655512795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:34.657268 containerd[1692]: time="2025-12-12T17:27:34.657141603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:27:34.657268 containerd[1692]: time="2025-12-12T17:27:34.657205323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:34.657429 kubelet[2892]: E1212 17:27:34.657370 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:34.657429 kubelet[2892]: E1212 17:27:34.657420 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:27:34.657520 kubelet[2892]: E1212 17:27:34.657500 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:34.657565 kubelet[2892]: E1212 17:27:34.657544 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" 
podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:27:36.971052 containerd[1692]: time="2025-12-12T17:27:36.971008200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:37.306720 containerd[1692]: time="2025-12-12T17:27:37.306654745Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:37.308201 containerd[1692]: time="2025-12-12T17:27:37.308154113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:37.308282 containerd[1692]: time="2025-12-12T17:27:37.308219153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:37.308481 kubelet[2892]: E1212 17:27:37.308433 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:37.308481 kubelet[2892]: E1212 17:27:37.308480 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:37.308809 kubelet[2892]: E1212 17:27:37.308550 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:37.308809 kubelet[2892]: E1212 17:27:37.308582 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:40.970812 containerd[1692]: time="2025-12-12T17:27:40.970765923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:27:41.303182 containerd[1692]: time="2025-12-12T17:27:41.303087611Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:41.306760 containerd[1692]: time="2025-12-12T17:27:41.306703990Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:27:41.306899 containerd[1692]: time="2025-12-12T17:27:41.306761750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:41.311448 kubelet[2892]: E1212 17:27:41.311200 2892 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:41.311448 kubelet[2892]: E1212 17:27:41.311270 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:27:41.311808 kubelet[2892]: E1212 17:27:41.311450 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:41.311808 kubelet[2892]: E1212 17:27:41.311494 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:41.311870 containerd[1692]: time="2025-12-12T17:27:41.311610815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:41.636986 containerd[1692]: time="2025-12-12T17:27:41.636851427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:41.638607 containerd[1692]: time="2025-12-12T17:27:41.638543196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:41.638716 containerd[1692]: time="2025-12-12T17:27:41.638639116Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:41.638913 kubelet[2892]: E1212 17:27:41.638876 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:41.639020 kubelet[2892]: E1212 17:27:41.639001 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:41.639172 kubelet[2892]: E1212 17:27:41.639150 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Dec 12 17:27:41.639271 kubelet[2892]: E1212 17:27:41.639249 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:27:42.969793 containerd[1692]: time="2025-12-12T17:27:42.969178397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:27:43.307617 containerd[1692]: time="2025-12-12T17:27:43.307552996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:43.309146 containerd[1692]: time="2025-12-12T17:27:43.309051364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:27:43.309250 containerd[1692]: time="2025-12-12T17:27:43.309106244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:43.309432 kubelet[2892]: E1212 17:27:43.309278 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:43.309432 kubelet[2892]: E1212 17:27:43.309321 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:27:43.309432 kubelet[2892]: E1212 17:27:43.309389 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:43.310193 containerd[1692]: time="2025-12-12T17:27:43.310168889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:27:43.837574 containerd[1692]: time="2025-12-12T17:27:43.837425648Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:43.838794 containerd[1692]: time="2025-12-12T17:27:43.838755415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:27:43.838861 containerd[1692]: time="2025-12-12T17:27:43.838830015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:43.839029 kubelet[2892]: E1212 17:27:43.838960 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:43.839079 kubelet[2892]: E1212 17:27:43.839030 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:27:43.839582 kubelet[2892]: E1212 17:27:43.839097 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:43.839636 kubelet[2892]: E1212 17:27:43.839613 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:27:43.969316 containerd[1692]: time="2025-12-12T17:27:43.969273478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:27:44.309137 containerd[1692]: time="2025-12-12T17:27:44.308988204Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:44.311099 containerd[1692]: time="2025-12-12T17:27:44.310902854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:27:44.311099 containerd[1692]: time="2025-12-12T17:27:44.310976014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:44.311240 kubelet[2892]: E1212 17:27:44.311143 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:44.311240 kubelet[2892]: E1212 17:27:44.311190 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:27:44.311497 kubelet[2892]: E1212 17:27:44.311302 2892 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:44.311497 kubelet[2892]: E1212 17:27:44.311346 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:27:44.970849 containerd[1692]: time="2025-12-12T17:27:44.970692367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:27:45.302089 containerd[1692]: time="2025-12-12T17:27:45.301985730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:27:45.303243 containerd[1692]: time="2025-12-12T17:27:45.303199456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:27:45.303327 containerd[1692]: time="2025-12-12T17:27:45.303291696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:27:45.303528 kubelet[2892]: E1212 17:27:45.303480 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:45.303528 kubelet[2892]: E1212 17:27:45.303527 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:27:45.303646 kubelet[2892]: E1212 17:27:45.303606 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:27:45.303679 kubelet[2892]: E1212 17:27:45.303661 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:47.970129 
kubelet[2892]: E1212 17:27:47.970082 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:27:49.970383 kubelet[2892]: E1212 17:27:49.970129 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:27:54.971810 kubelet[2892]: E1212 17:27:54.971765 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:27:56.972945 kubelet[2892]: E1212 17:27:56.972248 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:27:56.972945 kubelet[2892]: E1212 17:27:56.972729 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:27:57.970812 kubelet[2892]: E1212 17:27:57.970605 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:27:58.974523 kubelet[2892]: E1212 17:27:58.974434 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:27:58.974523 kubelet[2892]: E1212 17:27:58.974514 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:28:00.969945 kubelet[2892]: E1212 17:28:00.969586 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:28:08.969446 kubelet[2892]: E1212 17:28:08.969311 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:28:09.969823 kubelet[2892]: E1212 17:28:09.969690 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:28:09.969823 kubelet[2892]: E1212 17:28:09.969770 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:28:09.970752 kubelet[2892]: E1212 17:28:09.970714 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:28:10.970253 kubelet[2892]: E1212 17:28:10.970089 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:28:11.970619 kubelet[2892]: E1212 17:28:11.970572 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:28:14.969937 kubelet[2892]: E1212 17:28:14.969886 
2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:28:21.971499 kubelet[2892]: E1212 17:28:21.971441 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:28:21.971933 containerd[1692]: time="2025-12-12T17:28:21.971631289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:22.314243 containerd[1692]: time="2025-12-12T17:28:22.314181029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:22.315517 containerd[1692]: time="2025-12-12T17:28:22.315467556Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:22.315578 containerd[1692]: time="2025-12-12T17:28:22.315506676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:22.315725 kubelet[2892]: E1212 17:28:22.315680 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:22.315778 kubelet[2892]: E1212 17:28:22.315725 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:22.315984 kubelet[2892]: E1212 17:28:22.315889 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:22.315984 kubelet[2892]: E1212 17:28:22.315967 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:28:22.316247 containerd[1692]: time="2025-12-12T17:28:22.316053199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:28:22.676147 containerd[1692]: time="2025-12-12T17:28:22.675971068Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:22.677343 containerd[1692]: time="2025-12-12T17:28:22.677294514Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:28:22.677343 containerd[1692]: time="2025-12-12T17:28:22.677371995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:22.677562 kubelet[2892]: E1212 17:28:22.677503 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:22.677562 kubelet[2892]: E1212 17:28:22.677540 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:28:22.677625 kubelet[2892]: E1212 17:28:22.677601 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:22.677732 kubelet[2892]: E1212 17:28:22.677631 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:28:23.970150 containerd[1692]: time="2025-12-12T17:28:23.970039243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:28:24.300474 containerd[1692]: time="2025-12-12T17:28:24.300429802Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:24.301625 containerd[1692]: time="2025-12-12T17:28:24.301593727Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:28:24.301625 containerd[1692]: time="2025-12-12T17:28:24.301653768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:24.301934 kubelet[2892]: E1212 17:28:24.301896 2892 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:24.302420 kubelet[2892]: E1212 17:28:24.302260 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:28:24.302569 kubelet[2892]: E1212 17:28:24.302440 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:24.303021 containerd[1692]: time="2025-12-12T17:28:24.302652893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:28:24.664956 containerd[1692]: time="2025-12-12T17:28:24.664827493Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:24.666425 containerd[1692]: time="2025-12-12T17:28:24.666378261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:28:24.666509 containerd[1692]: time="2025-12-12T17:28:24.666418781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:24.666612 kubelet[2892]: E1212 17:28:24.666574 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:24.666670 kubelet[2892]: E1212 17:28:24.666618 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:28:24.666794 kubelet[2892]: E1212 17:28:24.666767 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:24.668142 containerd[1692]: time="2025-12-12T17:28:24.667228105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:28:24.971762 kubelet[2892]: E1212 17:28:24.971341 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:28:25.011015 containerd[1692]: time="2025-12-12T17:28:25.010960692Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:25.012694 containerd[1692]: time="2025-12-12T17:28:25.012624140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:28:25.012787 containerd[1692]: time="2025-12-12T17:28:25.012658460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:25.012894 kubelet[2892]: E1212 17:28:25.012850 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:25.012956 kubelet[2892]: E1212 17:28:25.012902 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:28:25.013128 kubelet[2892]: E1212 17:28:25.013080 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:25.013332 containerd[1692]: time="2025-12-12T17:28:25.013310824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:28:25.013372 kubelet[2892]: E1212 17:28:25.013179 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:28:25.342000 containerd[1692]: time="2025-12-12T17:28:25.341881413Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:25.343382 containerd[1692]: time="2025-12-12T17:28:25.343340221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:28:25.343580 containerd[1692]: time="2025-12-12T17:28:25.343415421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:25.343613 kubelet[2892]: E1212 17:28:25.343571 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:25.344372 kubelet[2892]: E1212 17:28:25.343617 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:28:25.344372 kubelet[2892]: E1212 17:28:25.343698 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:25.344372 kubelet[2892]: E1212 17:28:25.343737 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:28:25.970786 containerd[1692]: time="2025-12-12T17:28:25.970642488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:26.133110 systemd[1]: Started sshd@7-10.0.11.71:22-139.178.89.65:40514.service - OpenSSH per-connection server daemon (139.178.89.65:40514). Dec 12 17:28:26.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.11.71:22-139.178.89.65:40514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:26.134589 kernel: kauditd_printk_skb: 21 callbacks suppressed Dec 12 17:28:26.134664 kernel: audit: type=1130 audit(1765560506.132:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.11.71:22-139.178.89.65:40514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:26.315141 containerd[1692]: time="2025-12-12T17:28:26.315026078Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:26.317259 containerd[1692]: time="2025-12-12T17:28:26.317190809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:26.317398 containerd[1692]: time="2025-12-12T17:28:26.317292809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:26.317501 kubelet[2892]: E1212 17:28:26.317464 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:26.317571 kubelet[2892]: E1212 17:28:26.317510 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:26.317609 kubelet[2892]: E1212 17:28:26.317587 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:26.317641 kubelet[2892]: E1212 17:28:26.317622 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:28:26.959000 audit[5324]: USER_ACCT pid=5324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.961198 sshd[5324]: Accepted publickey for core from 139.178.89.65 port 40514 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:26.963800 sshd-session[5324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:26.962000 audit[5324]: CRED_ACQ pid=5324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.968551 kernel: audit: type=1101 audit(1765560506.959:763): pid=5324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.968618 kernel: audit: type=1103 audit(1765560506.962:764): pid=5324 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.968642 kernel: audit: type=1006 audit(1765560506.962:765): pid=5324 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 12 17:28:26.962000 audit[5324]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffab41d40 a2=3 a3=0 items=0 ppid=1 pid=5324 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.972657 systemd-logind[1666]: New session 8 of user core. Dec 12 17:28:26.974597 kernel: audit: type=1300 audit(1765560506.962:765): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffab41d40 a2=3 a3=0 items=0 ppid=1 pid=5324 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:26.974689 kernel: audit: type=1327 audit(1765560506.962:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:26.962000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:26.984368 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 12 17:28:26.985000 audit[5324]: USER_START pid=5324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.990000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.995077 kernel: audit: type=1105 audit(1765560506.985:766): pid=5324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:26.995260 kernel: audit: type=1103 audit(1765560506.990:767): pid=5341 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.525022 sshd[5341]: Connection closed by 139.178.89.65 port 40514 Dec 12 17:28:27.525504 sshd-session[5324]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:27.526000 audit[5324]: USER_END pid=5324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 
terminal=ssh res=success' Dec 12 17:28:27.529756 systemd-logind[1666]: Session 8 logged out. Waiting for processes to exit. Dec 12 17:28:27.529893 systemd[1]: sshd@7-10.0.11.71:22-139.178.89.65:40514.service: Deactivated successfully. Dec 12 17:28:27.526000 audit[5324]: CRED_DISP pid=5324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.532195 systemd[1]: session-8.scope: Deactivated successfully. Dec 12 17:28:27.533641 kernel: audit: type=1106 audit(1765560507.526:768): pid=5324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.533825 kernel: audit: type=1104 audit(1765560507.526:769): pid=5324 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:27.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.11.71:22-139.178.89.65:40514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:27.534044 systemd-logind[1666]: Removed session 8. Dec 12 17:28:32.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.11.71:22-139.178.89.65:39954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:32.693955 systemd[1]: Started sshd@8-10.0.11.71:22-139.178.89.65:39954.service - OpenSSH per-connection server daemon (139.178.89.65:39954). Dec 12 17:28:32.697339 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:32.697408 kernel: audit: type=1130 audit(1765560512.692:771): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.11.71:22-139.178.89.65:39954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:32.969715 kubelet[2892]: E1212 17:28:32.969585 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:28:33.527000 audit[5364]: USER_ACCT pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.528633 sshd[5364]: Accepted publickey for core from 139.178.89.65 port 39954 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:33.530529 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:33.529000 audit[5364]: CRED_ACQ pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.535175 kernel: audit: type=1101 audit(1765560513.527:772): pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.535241 kernel: audit: type=1103 audit(1765560513.529:773): pid=5364 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.537301 kernel: audit: type=1006 audit(1765560513.529:774): pid=5364 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 12 17:28:33.537434 kernel: audit: type=1300 audit(1765560513.529:774): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe32d7420 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:33.529000 audit[5364]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe32d7420 a2=3 a3=0 items=0 ppid=1 pid=5364 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:33.539969 systemd-logind[1666]: New session 9 of user core. Dec 12 17:28:33.529000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:33.541684 kernel: audit: type=1327 audit(1765560513.529:774): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:33.555601 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 12 17:28:33.557000 audit[5364]: USER_START pid=5364 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.558000 audit[5367]: CRED_ACQ pid=5367 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.564821 kernel: audit: type=1105 audit(1765560513.557:775): pid=5364 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.564886 kernel: audit: type=1103 audit(1765560513.558:776): pid=5367 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:33.969326 kubelet[2892]: E1212 17:28:33.969201 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:28:33.970328 containerd[1692]: time="2025-12-12T17:28:33.970252369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:28:34.064251 sshd[5367]: Connection closed by 139.178.89.65 port 39954 Dec 12 17:28:34.064776 sshd-session[5364]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:34.065000 audit[5364]: USER_END pid=5364 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:34.069364 systemd[1]: sshd@8-10.0.11.71:22-139.178.89.65:39954.service: Deactivated successfully. Dec 12 17:28:34.065000 audit[5364]: CRED_DISP pid=5364 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:34.071889 systemd[1]: session-9.scope: Deactivated successfully. 
Dec 12 17:28:34.073669 kernel: audit: type=1106 audit(1765560514.065:777): pid=5364 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:34.073734 kernel: audit: type=1104 audit(1765560514.065:778): pid=5364 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:34.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.11.71:22-139.178.89.65:39954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:34.073687 systemd-logind[1666]: Session 9 logged out. Waiting for processes to exit. Dec 12 17:28:34.075469 systemd-logind[1666]: Removed session 9. Dec 12 17:28:34.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.11.71:22-139.178.89.65:39970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:34.238597 systemd[1]: Started sshd@9-10.0.11.71:22-139.178.89.65:39970.service - OpenSSH per-connection server daemon (139.178.89.65:39970). Dec 12 17:28:34.277605 containerd[1692]: time="2025-12-12T17:28:34.277549890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:34.278932 containerd[1692]: time="2025-12-12T17:28:34.278881297Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:28:34.279657 kubelet[2892]: E1212 17:28:34.279090 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:34.279657 kubelet[2892]: E1212 17:28:34.279146 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:28:34.279657 kubelet[2892]: E1212 17:28:34.279215 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:34.279657 kubelet[2892]: E1212 17:28:34.279242 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:28:34.280752 containerd[1692]: time="2025-12-12T17:28:34.278962497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:35.073000 audit[5381]: USER_ACCT pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:35.075339 sshd[5381]: Accepted publickey for core from 139.178.89.65 port 39970 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:35.075000 audit[5381]: CRED_ACQ pid=5381 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:35.075000 audit[5381]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff2416540 a2=3 a3=0 items=0 ppid=1 pid=5381 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:35.075000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:35.076968 sshd-session[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:35.081163 systemd-logind[1666]: New session 10 of user core. Dec 12 17:28:35.090344 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 12 17:28:35.092000 audit[5381]: USER_START pid=5381 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:35.093000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:35.644791 sshd[5384]: Connection closed by 139.178.89.65 port 39970 Dec 12 17:28:35.643715 sshd-session[5381]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:35.643000 audit[5381]: USER_END pid=5381 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:35.643000 audit[5381]: CRED_DISP pid=5381 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:35.647462 systemd[1]: sshd@9-10.0.11.71:22-139.178.89.65:39970.service: Deactivated successfully. 
Dec 12 17:28:35.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.11.71:22-139.178.89.65:39970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:35.649567 systemd[1]: session-10.scope: Deactivated successfully. Dec 12 17:28:35.651423 systemd-logind[1666]: Session 10 logged out. Waiting for processes to exit. Dec 12 17:28:35.654601 systemd-logind[1666]: Removed session 10. Dec 12 17:28:35.808560 systemd[1]: Started sshd@10-10.0.11.71:22-139.178.89.65:39986.service - OpenSSH per-connection server daemon (139.178.89.65:39986). Dec 12 17:28:35.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.11.71:22-139.178.89.65:39986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:35.970790 kubelet[2892]: E1212 17:28:35.970562 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:28:36.628404 sshd[5396]: Accepted publickey for core from 139.178.89.65 port 39986 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:36.627000 audit[5396]: USER_ACCT pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:36.629000 audit[5396]: CRED_ACQ pid=5396 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:36.629000 audit[5396]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8f2c720 a2=3 a3=0 items=0 ppid=1 pid=5396 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:36.629000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:36.630773 sshd-session[5396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:36.636191 systemd-logind[1666]: New session 11 of user core. Dec 12 17:28:36.642348 systemd[1]: Started session-11.scope - Session 11 of User core. 
Dec 12 17:28:36.643000 audit[5396]: USER_START pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:36.646000 audit[5399]: CRED_ACQ pid=5399 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:37.169236 sshd[5399]: Connection closed by 139.178.89.65 port 39986 Dec 12 17:28:37.170590 sshd-session[5396]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:37.170000 audit[5396]: USER_END pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:37.171000 audit[5396]: CRED_DISP pid=5396 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:37.175970 systemd[1]: sshd@10-10.0.11.71:22-139.178.89.65:39986.service: Deactivated successfully. Dec 12 17:28:37.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.11.71:22-139.178.89.65:39986 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:37.178224 systemd[1]: session-11.scope: Deactivated successfully. Dec 12 17:28:37.179424 systemd-logind[1666]: Session 11 logged out. Waiting for processes to exit. Dec 12 17:28:37.181091 systemd-logind[1666]: Removed session 11. 
Dec 12 17:28:37.969556 kubelet[2892]: E1212 17:28:37.969501 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:28:39.969338 containerd[1692]: time="2025-12-12T17:28:39.969291600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:28:40.312623 containerd[1692]: time="2025-12-12T17:28:40.312564584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:28:40.313697 containerd[1692]: time="2025-12-12T17:28:40.313645110Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:28:40.313788 containerd[1692]: time="2025-12-12T17:28:40.313726270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:28:40.314106 kubelet[2892]: E1212 17:28:40.313894 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:40.314106 kubelet[2892]: E1212 17:28:40.313948 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:28:40.314106 kubelet[2892]: E1212 17:28:40.314025 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:28:40.314106 kubelet[2892]: E1212 17:28:40.314056 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:28:40.973430 kubelet[2892]: E1212 17:28:40.973383 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:28:42.356781 systemd[1]: Started sshd@11-10.0.11.71:22-139.178.89.65:39584.service - OpenSSH per-connection server daemon (139.178.89.65:39584). Dec 12 17:28:42.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.11.71:22-139.178.89.65:39584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:42.360498 kernel: kauditd_printk_skb: 23 callbacks suppressed Dec 12 17:28:42.360599 kernel: audit: type=1130 audit(1765560522.355:798): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.11.71:22-139.178.89.65:39584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:43.244000 audit[5417]: USER_ACCT pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.248000 audit[5417]: CRED_ACQ pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.250293 sshd[5417]: Accepted publickey for core from 139.178.89.65 port 39584 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:43.250358 sshd-session[5417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:43.252921 kernel: audit: type=1101 audit(1765560523.244:799): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.253026 kernel: audit: type=1103 audit(1765560523.248:800): pid=5417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.253049 kernel: audit: type=1006 audit(1765560523.248:801): pid=5417 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 12 17:28:43.248000 audit[5417]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc0aafb0 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:43.258218 kernel: audit: type=1300 audit(1765560523.248:801): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc0aafb0 a2=3 a3=0 items=0 ppid=1 pid=5417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:43.258310 kernel: audit: type=1327 audit(1765560523.248:801): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:43.248000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:43.264093 systemd-logind[1666]: New session 12 of user core. Dec 12 17:28:43.272630 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 12 17:28:43.277000 audit[5417]: USER_START pid=5417 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.287974 kernel: audit: type=1105 audit(1765560523.277:802): pid=5417 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.288083 kernel: audit: type=1103 audit(1765560523.280:803): pid=5420 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.280000 audit[5420]: CRED_ACQ pid=5420 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.823712 sshd[5420]: Connection closed by 139.178.89.65 port 39584 Dec 12 17:28:43.824284 sshd-session[5417]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:43.824000 audit[5417]: USER_END pid=5417 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.828841 systemd[1]: sshd@11-10.0.11.71:22-139.178.89.65:39584.service: Deactivated successfully. Dec 12 17:28:43.825000 audit[5417]: CRED_DISP pid=5417 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.830948 systemd[1]: session-12.scope: Deactivated successfully. Dec 12 17:28:43.832229 systemd-logind[1666]: Session 12 logged out. Waiting for processes to exit. 
Dec 12 17:28:43.832561 kernel: audit: type=1106 audit(1765560523.824:804): pid=5417 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.832845 kernel: audit: type=1104 audit(1765560523.825:805): pid=5417 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:43.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.11.71:22-139.178.89.65:39584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:43.833610 systemd-logind[1666]: Removed session 12. Dec 12 17:28:44.972036 kubelet[2892]: E1212 17:28:44.971995 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:28:46.969566 kubelet[2892]: E1212 17:28:46.969524 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:28:46.969949 kubelet[2892]: E1212 17:28:46.969861 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:28:48.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.11.71:22-139.178.89.65:39598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:48.999009 systemd[1]: Started sshd@12-10.0.11.71:22-139.178.89.65:39598.service - OpenSSH per-connection server daemon (139.178.89.65:39598). Dec 12 17:28:49.002401 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:49.002642 kernel: audit: type=1130 audit(1765560528.997:807): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.11.71:22-139.178.89.65:39598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:49.835000 audit[5434]: USER_ACCT pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.836373 sshd[5434]: Accepted publickey for core from 139.178.89.65 port 39598 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:49.839072 sshd-session[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:49.837000 audit[5434]: CRED_ACQ pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.842621 kernel: audit: type=1101 audit(1765560529.835:808): pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.842707 kernel: audit: type=1103 audit(1765560529.837:809): pid=5434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.844545 kernel: audit: type=1006 audit(1765560529.837:810): pid=5434 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 12 17:28:49.837000 audit[5434]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc96130 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:49.848160 kernel: audit: type=1300 audit(1765560529.837:810): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc96130 a2=3 a3=0 items=0 ppid=1 pid=5434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:49.837000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:49.849629 kernel: audit: type=1327 audit(1765560529.837:810): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:49.850656 systemd-logind[1666]: New session 13 of user core. Dec 12 17:28:49.858347 systemd[1]: Started session-13.scope - Session 13 of User core. 
Dec 12 17:28:49.859000 audit[5434]: USER_START pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.860000 audit[5437]: CRED_ACQ pid=5437 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.867084 kernel: audit: type=1105 audit(1765560529.859:811): pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.867178 kernel: audit: type=1103 audit(1765560529.860:812): pid=5437 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:49.970410 kubelet[2892]: E1212 17:28:49.970308 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:28:50.377094 sshd[5437]: Connection closed by 139.178.89.65 port 39598 Dec 12 17:28:50.378718 sshd-session[5434]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:50.378000 audit[5434]: USER_END pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:50.382552 systemd[1]: sshd@12-10.0.11.71:22-139.178.89.65:39598.service: Deactivated successfully. Dec 12 17:28:50.378000 audit[5434]: CRED_DISP pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:50.384770 systemd[1]: session-13.scope: Deactivated successfully. 
Dec 12 17:28:50.386722 kernel: audit: type=1106 audit(1765560530.378:813): pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:50.386783 kernel: audit: type=1104 audit(1765560530.378:814): pid=5434 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:50.386713 systemd-logind[1666]: Session 13 logged out. Waiting for processes to exit. Dec 12 17:28:50.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.11.71:22-139.178.89.65:39598 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:50.387779 systemd-logind[1666]: Removed session 13. Dec 12 17:28:51.969943 kubelet[2892]: E1212 17:28:51.969743 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:28:51.970965 kubelet[2892]: E1212 17:28:51.970915 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:28:52.969569 kubelet[2892]: E1212 17:28:52.969511 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:28:55.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.11.71:22-139.178.89.65:60918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:55.545104 systemd[1]: Started sshd@13-10.0.11.71:22-139.178.89.65:60918.service - OpenSSH per-connection server daemon (139.178.89.65:60918). Dec 12 17:28:55.548677 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:28:55.548741 kernel: audit: type=1130 audit(1765560535.544:816): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.11.71:22-139.178.89.65:60918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:56.372000 audit[5477]: USER_ACCT pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.375305 sshd[5477]: Accepted publickey for core from 139.178.89.65 port 60918 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:56.377140 kernel: audit: type=1101 audit(1765560536.372:817): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.376000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.377614 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:56.382667 kernel: audit: type=1103 audit(1765560536.376:818): pid=5477 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.382725 kernel: audit: type=1006 audit(1765560536.376:819): pid=5477 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 12 17:28:56.382740 kernel: audit: type=1300 audit(1765560536.376:819): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffed15bc0 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:56.376000 audit[5477]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffed15bc0 a2=3 a3=0 items=0 ppid=1 pid=5477 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:56.382882 systemd-logind[1666]: New session 14 of user core. Dec 12 17:28:56.376000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:56.387265 kernel: audit: type=1327 audit(1765560536.376:819): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:56.392411 systemd[1]: Started session-14.scope - Session 14 of User core. 
Dec 12 17:28:56.394000 audit[5477]: USER_START pid=5477 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.399148 kernel: audit: type=1105 audit(1765560536.394:820): pid=5477 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.398000 audit[5480]: CRED_ACQ pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.403164 kernel: audit: type=1103 audit(1765560536.398:821): pid=5480 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.916456 sshd[5480]: Connection closed by 139.178.89.65 port 60918 Dec 12 17:28:56.917260 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:56.918000 audit[5477]: USER_END pid=5477 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.921876 systemd[1]: sshd@13-10.0.11.71:22-139.178.89.65:60918.service: Deactivated successfully. Dec 12 17:28:56.918000 audit[5477]: CRED_DISP pid=5477 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.925864 kernel: audit: type=1106 audit(1765560536.918:822): pid=5477 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.925928 kernel: audit: type=1104 audit(1765560536.918:823): pid=5477 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:56.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.11.71:22-139.178.89.65:60918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:56.926075 systemd[1]: session-14.scope: Deactivated successfully. Dec 12 17:28:56.928157 systemd-logind[1666]: Session 14 logged out. Waiting for processes to exit. Dec 12 17:28:56.929183 systemd-logind[1666]: Removed session 14. 
Dec 12 17:28:56.970188 kubelet[2892]: E1212 17:28:56.969766 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:28:57.088419 systemd[1]: Started sshd@14-10.0.11.71:22-139.178.89.65:60922.service - OpenSSH per-connection server daemon (139.178.89.65:60922). Dec 12 17:28:57.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.11.71:22-139.178.89.65:60922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:57.927055 sshd[5493]: Accepted publickey for core from 139.178.89.65 port 60922 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:57.925000 audit[5493]: USER_ACCT pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:57.927000 audit[5493]: CRED_ACQ pid=5493 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:57.927000 audit[5493]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd26b7fe0 a2=3 a3=0 items=0 ppid=1 pid=5493 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:57.927000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:57.928599 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:57.932793 systemd-logind[1666]: New session 15 of user core. Dec 12 17:28:57.941276 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 12 17:28:57.943000 audit[5493]: USER_START pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:57.945000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:58.563710 sshd[5496]: Connection closed by 139.178.89.65 port 60922 Dec 12 17:28:58.564272 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Dec 12 17:28:58.564000 audit[5493]: USER_END pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:58.565000 audit[5493]: CRED_DISP pid=5493 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:58.568960 systemd[1]: sshd@14-10.0.11.71:22-139.178.89.65:60922.service: Deactivated successfully. Dec 12 17:28:58.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.11.71:22-139.178.89.65:60922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:28:58.572887 systemd[1]: session-15.scope: Deactivated successfully. Dec 12 17:28:58.574105 systemd-logind[1666]: Session 15 logged out. Waiting for processes to exit. Dec 12 17:28:58.576447 systemd-logind[1666]: Removed session 15. Dec 12 17:28:58.736547 systemd[1]: Started sshd@15-10.0.11.71:22-139.178.89.65:60926.service - OpenSSH per-connection server daemon (139.178.89.65:60926). Dec 12 17:28:58.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.11.71:22-139.178.89.65:60926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:28:59.567000 audit[5507]: USER_ACCT pid=5507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:59.569284 sshd[5507]: Accepted publickey for core from 139.178.89.65 port 60926 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:28:59.568000 audit[5507]: CRED_ACQ pid=5507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:59.568000 audit[5507]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc0c0860 a2=3 a3=0 items=0 ppid=1 pid=5507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:28:59.568000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:28:59.570671 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:28:59.575220 systemd-logind[1666]: New session 16 of user core. Dec 12 17:28:59.583509 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 12 17:28:59.584000 audit[5507]: USER_START pid=5507 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:28:59.586000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:00.369000 audit[5522]: NETFILTER_CFG table=filter:143 family=2 entries=26 op=nft_register_rule pid=5522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:00.369000 audit[5522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff00456d0 a2=0 a3=1 items=0 ppid=3021 pid=5522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:00.369000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:00.376000 audit[5522]: NETFILTER_CFG table=nat:144 family=2 entries=20 op=nft_register_rule pid=5522 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:00.376000 audit[5522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff00456d0 a2=0 a3=1 items=0 ppid=3021 pid=5522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:00.376000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:00.534281 sshd[5510]: Connection closed by 139.178.89.65 port 60926 Dec 12 
17:29:00.535297 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:00.535000 audit[5507]: USER_END pid=5507 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:00.536000 audit[5507]: CRED_DISP pid=5507 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:00.540649 systemd[1]: sshd@15-10.0.11.71:22-139.178.89.65:60926.service: Deactivated successfully. Dec 12 17:29:00.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.11.71:22-139.178.89.65:60926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:00.545645 systemd[1]: session-16.scope: Deactivated successfully. Dec 12 17:29:00.546715 systemd-logind[1666]: Session 16 logged out. Waiting for processes to exit. Dec 12 17:29:00.547987 systemd-logind[1666]: Removed session 16. Dec 12 17:29:00.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.11.71:22-139.178.89.65:56168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:00.712723 systemd[1]: Started sshd@16-10.0.11.71:22-139.178.89.65:56168.service - OpenSSH per-connection server daemon (139.178.89.65:56168). Dec 12 17:29:00.713556 kernel: kauditd_printk_skb: 29 callbacks suppressed Dec 12 17:29:00.713594 kernel: audit: type=1130 audit(1765560540.711:845): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.11.71:22-139.178.89.65:56168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:00.969315 kubelet[2892]: E1212 17:29:00.969199 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:29:01.390000 audit[5531]: NETFILTER_CFG table=filter:145 family=2 entries=38 op=nft_register_rule pid=5531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:01.390000 audit[5531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe0558a80 a2=0 a3=1 items=0 ppid=3021 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:01.397552 kernel: audit: type=1325 audit(1765560541.390:846): table=filter:145 family=2 entries=38 op=nft_register_rule pid=5531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:01.397633 kernel: audit: type=1300 audit(1765560541.390:846): arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe0558a80 a2=0 a3=1 items=0 ppid=3021 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:01.397665 kernel: audit: type=1327 audit(1765560541.390:846): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:01.390000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:01.404000 audit[5531]: NETFILTER_CFG table=nat:146 family=2 entries=20 op=nft_register_rule pid=5531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:01.404000 audit[5531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe0558a80 a2=0 a3=1 items=0 ppid=3021 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:01.411875 kernel: audit: type=1325 audit(1765560541.404:847): table=nat:146 family=2 entries=20 op=nft_register_rule pid=5531 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:01.411946 kernel: audit: type=1300 audit(1765560541.404:847): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe0558a80 a2=0 a3=1 items=0 ppid=3021 pid=5531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:01.411965 kernel: audit: type=1327 audit(1765560541.404:847): proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:01.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:01.601583 sshd[5527]: Accepted publickey for core from 139.178.89.65 port 56168 ssh2: RSA 
SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:29:01.600000 audit[5527]: USER_ACCT pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:01.606892 kernel: audit: type=1101 audit(1765560541.600:848): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:01.606966 kernel: audit: type=1103 audit(1765560541.605:849): pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:01.605000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:01.606744 sshd-session[5527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:01.611578 kernel: audit: type=1006 audit(1765560541.605:850): pid=5527 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 12 17:29:01.605000 audit[5527]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd2f1fe20 a2=3 a3=0 items=0 ppid=1 pid=5527 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:01.605000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:01.612638 systemd-logind[1666]: New session 17 of user core. Dec 12 17:29:01.619286 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 12 17:29:01.621000 audit[5527]: USER_START pid=5527 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:01.623000 audit[5532]: CRED_ACQ pid=5532 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:01.971534 kubelet[2892]: E1212 17:29:01.971487 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:29:01.972881 kubelet[2892]: E1212 17:29:01.971598 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:29:02.334134 sshd[5532]: Connection closed by 139.178.89.65 port 56168 Dec 12 17:29:02.334775 sshd-session[5527]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:02.334000 audit[5527]: USER_END pid=5527 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:02.334000 audit[5527]: CRED_DISP pid=5527 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:02.338194 systemd[1]: sshd@16-10.0.11.71:22-139.178.89.65:56168.service: Deactivated successfully. Dec 12 17:29:02.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.11.71:22-139.178.89.65:56168 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:02.340338 systemd[1]: session-17.scope: Deactivated successfully. Dec 12 17:29:02.342434 systemd-logind[1666]: Session 17 logged out. Waiting for processes to exit. 
Dec 12 17:29:02.346379 systemd-logind[1666]: Removed session 17. Dec 12 17:29:02.507367 systemd[1]: Started sshd@17-10.0.11.71:22-139.178.89.65:56170.service - OpenSSH per-connection server daemon (139.178.89.65:56170). Dec 12 17:29:02.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.11.71:22-139.178.89.65:56170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:02.969425 kubelet[2892]: E1212 17:29:02.969380 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:29:03.327000 audit[5544]: USER_ACCT pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:03.329180 sshd[5544]: Accepted publickey for core from 139.178.89.65 port 56170 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:29:03.328000 audit[5544]: CRED_ACQ pid=5544 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:03.328000 audit[5544]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd235f2a0 a2=3 a3=0 items=0 ppid=1 pid=5544 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:03.328000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:03.330402 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:03.335294 systemd-logind[1666]: New session 18 of user core. Dec 12 17:29:03.346323 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 12 17:29:03.348000 audit[5544]: USER_START pid=5544 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:03.349000 audit[5547]: CRED_ACQ pid=5547 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:03.862463 sshd[5547]: Connection closed by 139.178.89.65 port 56170 Dec 12 17:29:03.863017 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:03.863000 audit[5544]: USER_END pid=5544 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:03.863000 audit[5544]: CRED_DISP pid=5544 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:03.867050 systemd[1]: sshd@17-10.0.11.71:22-139.178.89.65:56170.service: Deactivated successfully. Dec 12 17:29:03.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.11.71:22-139.178.89.65:56170 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:03.869592 systemd[1]: session-18.scope: Deactivated successfully. Dec 12 17:29:03.870923 systemd-logind[1666]: Session 18 logged out. Waiting for processes to exit. Dec 12 17:29:03.871789 systemd-logind[1666]: Removed session 18. 
Dec 12 17:29:03.969417 kubelet[2892]: E1212 17:29:03.969334 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:29:04.615000 audit[5560]: NETFILTER_CFG table=filter:147 family=2 entries=26 op=nft_register_rule pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:04.615000 audit[5560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd01a0330 a2=0 a3=1 items=0 ppid=3021 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:04.615000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:04.628000 audit[5560]: NETFILTER_CFG table=nat:148 family=2 entries=104 op=nft_register_chain pid=5560 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 12 17:29:04.628000 audit[5560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffd01a0330 a2=0 a3=1 items=0 ppid=3021 pid=5560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:04.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D2D6E6F666C757368002D2D636F756E74657273 Dec 12 17:29:04.971984 kubelet[2892]: E1212 17:29:04.971830 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:29:09.029000 systemd[1]: Started sshd@18-10.0.11.71:22-139.178.89.65:56178.service - OpenSSH per-connection server daemon (139.178.89.65:56178). Dec 12 17:29:09.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.11.71:22-139.178.89.65:56178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:09.032857 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 12 17:29:09.032933 kernel: audit: type=1130 audit(1765560549.028:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.11.71:22-139.178.89.65:56178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:09.864000 audit[5562]: USER_ACCT pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.865609 sshd[5562]: Accepted publickey for core from 139.178.89.65 port 56178 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:29:09.870131 kernel: audit: type=1101 audit(1765560549.864:868): pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.869000 audit[5562]: CRED_ACQ pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.870787 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:09.876226 kernel: audit: type=1103 audit(1765560549.869:869): pid=5562 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.876320 kernel: audit: type=1006 audit(1765560549.869:870): pid=5562 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 12 17:29:09.876341 kernel: audit: type=1300 audit(1765560549.869:870): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7e1ac30 a2=3 a3=0 items=0 ppid=1 pid=5562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:09.869000 audit[5562]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7e1ac30 a2=3 a3=0 items=0 ppid=1 pid=5562 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:09.869000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:09.881029 kernel: audit: type=1327 audit(1765560549.869:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:09.885757 systemd-logind[1666]: New session 19 of user core. Dec 12 17:29:09.896812 systemd[1]: Started session-19.scope - Session 19 of User core. 
Dec 12 17:29:09.898000 audit[5562]: USER_START pid=5562 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.904134 kernel: audit: type=1105 audit(1765560549.898:871): pid=5562 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.903000 audit[5565]: CRED_ACQ pid=5565 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.908132 kernel: audit: type=1103 audit(1765560549.903:872): pid=5565 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:09.969830 kubelet[2892]: E1212 17:29:09.969770 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:29:10.403141 sshd[5565]: Connection closed by 139.178.89.65 port 56178 Dec 12 17:29:10.404301 sshd-session[5562]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:10.405000 audit[5562]: USER_END pid=5562 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:10.409228 systemd[1]: sshd@18-10.0.11.71:22-139.178.89.65:56178.service: Deactivated successfully. Dec 12 17:29:10.411756 systemd[1]: session-19.scope: Deactivated successfully. 
Dec 12 17:29:10.405000 audit[5562]: CRED_DISP pid=5562 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:10.416273 kernel: audit: type=1106 audit(1765560550.405:873): pid=5562 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:10.416398 kernel: audit: type=1104 audit(1765560550.405:874): pid=5562 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:10.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.11.71:22-139.178.89.65:56178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:10.416890 systemd-logind[1666]: Session 19 logged out. Waiting for processes to exit. Dec 12 17:29:10.418726 systemd-logind[1666]: Removed session 19. Dec 12 17:29:14.972021 kubelet[2892]: E1212 17:29:14.971950 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:29:14.972599 kubelet[2892]: E1212 17:29:14.972507 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:29:14.973153 kubelet[2892]: E1212 17:29:14.972634 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:29:15.578547 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:15.578662 kernel: audit: type=1130 audit(1765560555.575:876): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.11.71:22-139.178.89.65:38926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 12 17:29:15.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.11.71:22-139.178.89.65:38926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:15.576672 systemd[1]: Started sshd@19-10.0.11.71:22-139.178.89.65:38926.service - OpenSSH per-connection server daemon (139.178.89.65:38926). Dec 12 17:29:15.969129 kubelet[2892]: E1212 17:29:15.968994 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:29:15.970767 kubelet[2892]: E1212 17:29:15.970730 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:29:16.415000 audit[5581]: USER_ACCT pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.419928 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:16.420530 sshd[5581]: Accepted publickey for core from 139.178.89.65 port 38926 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:29:16.418000 audit[5581]: CRED_ACQ pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.423590 kernel: audit: type=1101 audit(1765560556.415:877): pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.423752 kernel: audit: type=1103 audit(1765560556.418:878): pid=5581 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' 
Dec 12 17:29:16.423799 kernel: audit: type=1006 audit(1765560556.418:879): pid=5581 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 12 17:29:16.418000 audit[5581]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7152000 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:16.429300 kernel: audit: type=1300 audit(1765560556.418:879): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff7152000 a2=3 a3=0 items=0 ppid=1 pid=5581 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:16.418000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:16.431015 kernel: audit: type=1327 audit(1765560556.418:879): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:16.431618 systemd-logind[1666]: New session 20 of user core. Dec 12 17:29:16.439279 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 12 17:29:16.440000 audit[5581]: USER_START pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.445415 kernel: audit: type=1105 audit(1765560556.440:880): pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.445516 kernel: audit: type=1103 audit(1765560556.444:881): pid=5584 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.444000 audit[5584]: CRED_ACQ pid=5584 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.951891 sshd[5584]: Connection closed by 139.178.89.65 port 38926 Dec 12 17:29:16.952366 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:16.952000 audit[5581]: USER_END pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.956559 systemd[1]: sshd@19-10.0.11.71:22-139.178.89.65:38926.service: Deactivated successfully. 
Dec 12 17:29:16.953000 audit[5581]: CRED_DISP pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.958263 systemd[1]: session-20.scope: Deactivated successfully. Dec 12 17:29:16.959551 systemd-logind[1666]: Session 20 logged out. Waiting for processes to exit. Dec 12 17:29:16.960270 systemd-logind[1666]: Removed session 20. Dec 12 17:29:16.960649 kernel: audit: type=1106 audit(1765560556.952:882): pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.960694 kernel: audit: type=1104 audit(1765560556.953:883): pid=5581 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:16.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.11.71:22-139.178.89.65:38926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:16.969922 kubelet[2892]: E1212 17:29:16.969810 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:29:22.120357 systemd[1]: Started sshd@20-10.0.11.71:22-139.178.89.65:34230.service - OpenSSH per-connection server daemon (139.178.89.65:34230). Dec 12 17:29:22.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.11.71:22-139.178.89.65:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:22.121580 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:22.121754 kernel: audit: type=1130 audit(1765560562.119:885): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.11.71:22-139.178.89.65:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:22.944000 audit[5600]: USER_ACCT pid=5600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.946016 sshd[5600]: Accepted publickey for core from 139.178.89.65 port 34230 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:29:22.948163 sshd-session[5600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:22.946000 audit[5600]: CRED_ACQ pid=5600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.952326 kernel: audit: type=1101 audit(1765560562.944:886): pid=5600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.952383 kernel: audit: type=1103 audit(1765560562.946:887): pid=5600 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.954765 kernel: audit: type=1006 audit(1765560562.946:888): pid=5600 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 12 17:29:22.955820 kernel: audit: type=1300 audit(1765560562.946:888): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffba23e10 a2=3 a3=0 items=0 ppid=1 pid=5600 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:22.946000 audit[5600]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffba23e10 a2=3 a3=0 items=0 ppid=1 pid=5600 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:22.954942 systemd-logind[1666]: New session 21 of user core. Dec 12 17:29:22.946000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:22.959355 kernel: audit: type=1327 audit(1765560562.946:888): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:22.963419 systemd[1]: Started session-21.scope - Session 21 of User core. 
Dec 12 17:29:22.964000 audit[5600]: USER_START pid=5600 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.965000 audit[5603]: CRED_ACQ pid=5603 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.972044 kernel: audit: type=1105 audit(1765560562.964:889): pid=5600 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:22.972100 kernel: audit: type=1103 audit(1765560562.965:890): pid=5603 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:23.481479 sshd[5603]: Connection closed by 139.178.89.65 port 34230 Dec 12 17:29:23.481852 sshd-session[5600]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:23.482000 audit[5600]: USER_END pid=5600 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:23.486792 systemd[1]: sshd@20-10.0.11.71:22-139.178.89.65:34230.service: Deactivated successfully. Dec 12 17:29:23.483000 audit[5600]: CRED_DISP pid=5600 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:23.488684 systemd[1]: session-21.scope: Deactivated successfully. Dec 12 17:29:23.490994 kernel: audit: type=1106 audit(1765560563.482:891): pid=5600 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:23.491050 kernel: audit: type=1104 audit(1765560563.483:892): pid=5600 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:23.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.11.71:22-139.178.89.65:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:23.491499 systemd-logind[1666]: Session 21 logged out. Waiting for processes to exit. Dec 12 17:29:23.494998 systemd-logind[1666]: Removed session 21. 
Dec 12 17:29:24.969789 kubelet[2892]: E1212 17:29:24.969712 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:29:25.969156 kubelet[2892]: E1212 17:29:25.969075 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:29:26.973300 kubelet[2892]: E1212 17:29:26.973251 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:29:28.650015 systemd[1]: Started sshd@21-10.0.11.71:22-139.178.89.65:34242.service - OpenSSH per-connection server daemon (139.178.89.65:34242). Dec 12 17:29:28.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.11.71:22-139.178.89.65:34242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:28.654476 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:28.654568 kernel: audit: type=1130 audit(1765560568.649:894): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.11.71:22-139.178.89.65:34242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 12 17:29:29.482126 sshd[5641]: Accepted publickey for core from 139.178.89.65 port 34242 ssh2: RSA SHA256:r5D0f1fAK/zqqztByMTUofC414JXgtgtTmBwIS1lwUA Dec 12 17:29:29.480000 audit[5641]: USER_ACCT pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.484000 audit[5641]: CRED_ACQ pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.486066 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 12 17:29:29.488699 kernel: audit: type=1101 audit(1765560569.480:895): pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.488766 kernel: audit: type=1103 audit(1765560569.484:896): pid=5641 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.488788 kernel: audit: type=1006 audit(1765560569.484:897): pid=5641 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 12 17:29:29.490128 systemd-logind[1666]: New session 22 of user core. Dec 12 17:29:29.484000 audit[5641]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd980450 a2=3 a3=0 items=0 ppid=1 pid=5641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:29.494135 kernel: audit: type=1300 audit(1765560569.484:897): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd980450 a2=3 a3=0 items=0 ppid=1 pid=5641 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:29.494207 kernel: audit: type=1327 audit(1765560569.484:897): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:29.484000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Dec 12 17:29:29.502367 systemd[1]: Started session-22.scope - Session 22 of User core. 
Dec 12 17:29:29.504000 audit[5641]: USER_START pid=5641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.509149 kernel: audit: type=1105 audit(1765560569.504:898): pid=5641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.508000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.513161 kernel: audit: type=1103 audit(1765560569.508:899): pid=5644 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:29.969635 kubelet[2892]: E1212 17:29:29.969587 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:29:29.969635 kubelet[2892]: E1212 17:29:29.969610 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:29:29.970058 kubelet[2892]: E1212 17:29:29.969672 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:29:30.014758 sshd[5644]: Connection closed by 139.178.89.65 port 34242 Dec 12 17:29:30.015217 sshd-session[5641]: pam_unix(sshd:session): session closed for user core Dec 12 17:29:30.015000 audit[5641]: USER_END pid=5641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:30.019127 systemd[1]: sshd@21-10.0.11.71:22-139.178.89.65:34242.service: Deactivated successfully. Dec 12 17:29:30.015000 audit[5641]: CRED_DISP pid=5641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:30.021389 systemd[1]: session-22.scope: Deactivated successfully. Dec 12 17:29:30.023245 kernel: audit: type=1106 audit(1765560570.015:900): pid=5641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:30.023311 kernel: audit: type=1104 audit(1765560570.015:901): pid=5641 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 12 17:29:30.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.11.71:22-139.178.89.65:34242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 12 17:29:30.024814 systemd-logind[1666]: Session 22 logged out. Waiting for processes to exit. Dec 12 17:29:30.025659 systemd-logind[1666]: Removed session 22. Dec 12 17:29:30.970710 kubelet[2892]: E1212 17:29:30.970502 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:29:32.382274 update_engine[1667]: I20251212 17:29:32.382206 1667 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 12 17:29:32.382274 update_engine[1667]: I20251212 17:29:32.382259 1667 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 12 17:29:32.382700 update_engine[1667]: I20251212 17:29:32.382488 1667 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 12 17:29:32.382846 update_engine[1667]: I20251212 17:29:32.382804 1667 omaha_request_params.cc:62] Current group set to beta Dec 12 17:29:32.382931 update_engine[1667]: I20251212 17:29:32.382894 1667 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 12 17:29:32.382931 update_engine[1667]: I20251212 17:29:32.382915 1667 update_attempter.cc:643] Scheduling an action processor start. 
Dec 12 17:29:32.382983 update_engine[1667]: I20251212 17:29:32.382930 1667 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 17:29:32.383167 update_engine[1667]: I20251212 17:29:32.383145 1667 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 12 17:29:32.383231 update_engine[1667]: I20251212 17:29:32.383194 1667 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 17:29:32.383231 update_engine[1667]: I20251212 17:29:32.383206 1667 omaha_request_action.cc:272] Request: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: Dec 12 17:29:32.383231 update_engine[1667]: I20251212 17:29:32.383212 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:29:32.383487 locksmithd[1724]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 12 17:29:32.386144 update_engine[1667]: I20251212 17:29:32.386091 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:29:32.386837 update_engine[1667]: I20251212 17:29:32.386790 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 17:29:32.393313 update_engine[1667]: E20251212 17:29:32.393270 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:29:32.393398 update_engine[1667]: I20251212 17:29:32.393349 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 12 17:29:37.969912 kubelet[2892]: E1212 17:29:37.969827 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:29:37.970306 kubelet[2892]: E1212 17:29:37.970213 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:29:39.969436 kubelet[2892]: E1212 17:29:39.969379 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:29:41.970499 kubelet[2892]: E1212 17:29:41.970453 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:29:42.388068 update_engine[1667]: I20251212 17:29:42.387965 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:29:42.388068 update_engine[1667]: I20251212 17:29:42.388060 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:29:42.388625 update_engine[1667]: I20251212 17:29:42.388577 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 12 17:29:42.396119 update_engine[1667]: E20251212 17:29:42.396057 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:29:42.396252 update_engine[1667]: I20251212 17:29:42.396171 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 12 17:29:44.974238 kubelet[2892]: E1212 17:29:44.974177 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:29:44.974989 containerd[1692]: time="2025-12-12T17:29:44.974942088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:45.327214 containerd[1692]: time="2025-12-12T17:29:45.327153198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:45.328604 containerd[1692]: time="2025-12-12T17:29:45.328542765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:45.328692 containerd[1692]: time="2025-12-12T17:29:45.328581125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:45.328832 kubelet[2892]: E1212 17:29:45.328772 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:45.328832 kubelet[2892]: E1212 17:29:45.328822 2892 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:45.328947 kubelet[2892]: E1212 17:29:45.328928 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-ttqlr_calico-apiserver(3dccdef9-807b-4475-ade1-4a0bc2c4fe76): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:45.328980 kubelet[2892]: E1212 17:29:45.328962 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:29:45.970140 containerd[1692]: time="2025-12-12T17:29:45.969987023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Dec 12 17:29:46.316939 containerd[1692]: time="2025-12-12T17:29:46.316816185Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:46.318050 containerd[1692]: time="2025-12-12T17:29:46.318010231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Dec 12 17:29:46.318176 containerd[1692]: time="2025-12-12T17:29:46.318050951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:46.318274 kubelet[2892]: E1212 17:29:46.318239 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:29:46.318497 kubelet[2892]: E1212 17:29:46.318283 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Dec 12 17:29:46.318497 kubelet[2892]: E1212 17:29:46.318355 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:46.319171 containerd[1692]: time="2025-12-12T17:29:46.319145837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Dec 12 17:29:46.670223 containerd[1692]: time="2025-12-12T17:29:46.669973339Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Dec 12 17:29:46.671376 containerd[1692]: time="2025-12-12T17:29:46.671328826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Dec 12 17:29:46.671436 containerd[1692]: time="2025-12-12T17:29:46.671380706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:46.671596 kubelet[2892]: E1212 17:29:46.671540 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:29:46.671596 kubelet[2892]: E1212 17:29:46.671590 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Dec 12 17:29:46.671683 kubelet[2892]: E1212 17:29:46.671667 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67768bd4f8-7kbcr_calico-system(6cf9d953-a32f-4658-a2fe-c69d46e96850): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:46.671732 kubelet[2892]: E1212 17:29:46.671705 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:29:48.969866 kubelet[2892]: E1212 17:29:48.969803 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:29:51.969970 containerd[1692]: time="2025-12-12T17:29:51.969734061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Dec 12 17:29:52.315155 containerd[1692]: time="2025-12-12T17:29:52.315090416Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:52.316473 containerd[1692]: 
time="2025-12-12T17:29:52.316433102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Dec 12 17:29:52.316561 containerd[1692]: time="2025-12-12T17:29:52.316516063Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:52.316695 kubelet[2892]: E1212 17:29:52.316663 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:52.316994 kubelet[2892]: E1212 17:29:52.316705 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Dec 12 17:29:52.316994 kubelet[2892]: E1212 17:29:52.316785 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-c6kdq_calico-system(4c63d250-1806-4ef2-8959-7aad6322f80f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:52.316994 kubelet[2892]: E1212 17:29:52.316815 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-c6kdq" podUID="4c63d250-1806-4ef2-8959-7aad6322f80f" Dec 12 17:29:52.390552 update_engine[1667]: I20251212 17:29:52.390182 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:29:52.390552 update_engine[1667]: I20251212 17:29:52.390319 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:29:52.390963 update_engine[1667]: I20251212 17:29:52.390841 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 17:29:52.398509 update_engine[1667]: E20251212 17:29:52.398412 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:29:52.398661 update_engine[1667]: I20251212 17:29:52.398581 1667 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 12 17:29:52.969668 containerd[1692]: time="2025-12-12T17:29:52.969621381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Dec 12 17:29:53.304631 containerd[1692]: time="2025-12-12T17:29:53.304525402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:53.306501 containerd[1692]: time="2025-12-12T17:29:53.306465892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Dec 12 17:29:53.306580 containerd[1692]: time="2025-12-12T17:29:53.306545012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:53.306791 kubelet[2892]: E1212 17:29:53.306739 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:29:53.306791 kubelet[2892]: E1212 17:29:53.306785 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Dec 12 17:29:53.306878 kubelet[2892]: E1212 17:29:53.306857 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:53.307634 containerd[1692]: time="2025-12-12T17:29:53.307609617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Dec 12 17:29:53.619685 containerd[1692]: time="2025-12-12T17:29:53.619476042Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:53.621092 containerd[1692]: time="2025-12-12T17:29:53.621045970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Dec 12 17:29:53.621238 containerd[1692]: time="2025-12-12T17:29:53.621110370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:53.621336 kubelet[2892]: E1212 17:29:53.621281 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:29:53.621629 kubelet[2892]: E1212 17:29:53.621330 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Dec 12 17:29:53.621629 kubelet[2892]: E1212 17:29:53.621409 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-5cw4v_calico-system(131dba81-ae70-4090-a6eb-8ebf5f86d388): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:53.621629 kubelet[2892]: E1212 17:29:53.621451 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:29:54.970890 containerd[1692]: time="2025-12-12T17:29:54.970650106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:55.323717 containerd[1692]: time="2025-12-12T17:29:55.323605899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:55.325391 containerd[1692]: time="2025-12-12T17:29:55.324934865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:55.325391 containerd[1692]: time="2025-12-12T17:29:55.325027346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:55.325625 kubelet[2892]: E1212 17:29:55.325230 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:55.325625 kubelet[2892]: E1212 17:29:55.325276 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:55.325625 kubelet[2892]: E1212 17:29:55.325379 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67f4c54f9f-5ds7p_calico-apiserver(e2b17dac-df63-4a54-9c33-9908026d55bd): ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:55.325625 kubelet[2892]: E1212 17:29:55.325415 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-5ds7p" podUID="e2b17dac-df63-4a54-9c33-9908026d55bd" Dec 12 17:29:55.654335 systemd[1]: cri-containerd-570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a.scope: Deactivated successfully. Dec 12 17:29:55.654694 systemd[1]: cri-containerd-570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a.scope: Consumed 41.129s CPU time, 117.5M memory peak. Dec 12 17:29:55.655655 containerd[1692]: time="2025-12-12T17:29:55.655234143Z" level=info msg="received container exit event container_id:\"570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a\" id:\"570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a\" pid:3297 exit_status:1 exited_at:{seconds:1765560595 nanos:654865821}" Dec 12 17:29:55.660000 audit: BPF prog-id=151 op=UNLOAD Dec 12 17:29:55.662145 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 12 17:29:55.662207 kernel: audit: type=1334 audit(1765560595.660:903): prog-id=151 op=UNLOAD Dec 12 17:29:55.660000 audit: BPF prog-id=155 op=UNLOAD Dec 12 17:29:55.663813 kernel: audit: type=1334 audit(1765560595.660:904): prog-id=155 op=UNLOAD Dec 12 17:29:55.678271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a-rootfs.mount: Deactivated successfully. Dec 12 17:29:56.036608 systemd[1]: cri-containerd-4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa.scope: Deactivated successfully. Dec 12 17:29:56.037010 systemd[1]: cri-containerd-4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa.scope: Consumed 3.507s CPU time, 63.6M memory peak. Dec 12 17:29:56.036000 audit: BPF prog-id=266 op=LOAD Dec 12 17:29:56.038239 containerd[1692]: time="2025-12-12T17:29:56.038195209Z" level=info msg="received container exit event container_id:\"4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa\" id:\"4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa\" pid:2731 exit_status:1 exited_at:{seconds:1765560596 nanos:37840247}" Dec 12 17:29:56.037000 audit: BPF prog-id=88 op=UNLOAD Dec 12 17:29:56.039487 kernel: audit: type=1334 audit(1765560596.036:905): prog-id=266 op=LOAD Dec 12 17:29:56.039552 kernel: audit: type=1334 audit(1765560596.037:906): prog-id=88 op=UNLOAD Dec 12 17:29:56.044000 audit: BPF prog-id=103 op=UNLOAD Dec 12 17:29:56.044000 audit: BPF prog-id=107 op=UNLOAD Dec 12 17:29:56.046976 kernel: audit: type=1334 audit(1765560596.044:907): prog-id=103 op=UNLOAD Dec 12 17:29:56.047075 kernel: audit: type=1334 audit(1765560596.044:908): prog-id=107 op=UNLOAD Dec 12 17:29:56.061104 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa-rootfs.mount: Deactivated successfully. 
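[editor's note] The "Consumed 41.129s CPU time, 117.5M memory peak" and "Consumed 3.507s CPU time, 63.6M memory peak" figures above are systemd's cgroup accounting for each container's scope unit, reported as the scope is deactivated. On a cgroup v2 host the same numbers can be read from the cgroup files while the container is still running; a small sketch (the unified hierarchy at /sys/fs/cgroup and the presence of memory.peak are assumptions about the host, though both fit the 6.12 kernel booted in this log):

```python
#!/usr/bin/env python3
"""Read CPU time and peak memory for a cri-containerd scope from cgroup v2 files.

Sketch only: it searches the unified hierarchy because the slice layout differs
between plain services and kubelet-managed pods, and it only works while the
scope (i.e. the container) still exists.
"""
import sys
from pathlib import Path

def find_scope(scope: str) -> Path:
    matches = list(Path("/sys/fs/cgroup").glob(f"**/{scope}"))
    if not matches:
        raise FileNotFoundError(scope)
    return matches[0]

def scope_usage(scope: str) -> tuple[float, float]:
    cg = find_scope(scope)
    cpu_usec = 0
    for line in (cg / "cpu.stat").read_text().splitlines():
        key, _, value = line.partition(" ")
        if key == "usage_usec":
            cpu_usec = int(value)
    peak_bytes = int((cg / "memory.peak").read_text())
    return cpu_usec / 1e6, peak_bytes / (1024 * 1024)

if __name__ == "__main__":
    # e.g. cri-containerd-<container-id>.scope
    cpu_s, peak_mib = scope_usage(sys.argv[1])
    print(f"CPU time: {cpu_s:.3f}s, memory peak: {peak_mib:.1f}M")
```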
Dec 12 17:29:56.084464 kubelet[2892]: E1212 17:29:56.084431 2892 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.11.71:35678->10.0.11.118:2379: read: connection timed out" Dec 12 17:29:56.562661 kubelet[2892]: I1212 17:29:56.562625 2892 scope.go:117] "RemoveContainer" containerID="b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8" Dec 12 17:29:56.563015 kubelet[2892]: I1212 17:29:56.562909 2892 scope.go:117] "RemoveContainer" containerID="570a69d97449248968d16b77b5748a1ac9c4e3a768ffc172dafb1a338e0d0a2a" Dec 12 17:29:56.563554 kubelet[2892]: E1212 17:29:56.563087 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-2fr8f_tigera-operator(c898d5f5-e4e0-42f3-9833-780383a8e871)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-2fr8f" podUID="c898d5f5-e4e0-42f3-9833-780383a8e871" Dec 12 17:29:56.564472 containerd[1692]: time="2025-12-12T17:29:56.564437602Z" level=info msg="RemoveContainer for \"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\"" Dec 12 17:29:56.565857 kubelet[2892]: I1212 17:29:56.565836 2892 scope.go:117] "RemoveContainer" containerID="4cb6f304a9c8d839fc9f0de19f7a779e9944bbcf75b195ed9b1d5c1dc115f6fa" Dec 12 17:29:56.568355 containerd[1692]: time="2025-12-12T17:29:56.568205541Z" level=info msg="CreateContainer within sandbox \"f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 12 17:29:56.575711 containerd[1692]: time="2025-12-12T17:29:56.575672019Z" level=info msg="RemoveContainer for \"b052d4e24e31216de6e8a66aed240e1938d1f0cddc9e4769ebd14887f24974e8\" returns successfully" Dec 12 17:29:56.584302 containerd[1692]: time="2025-12-12T17:29:56.583438538Z" level=info msg="Container fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:29:56.594993 containerd[1692]: time="2025-12-12T17:29:56.594950197Z" level=info msg="CreateContainer within sandbox \"f99ac0ed413b261868de20d100267774991d4135943de08fd2712955f8d919f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc\"" Dec 12 17:29:56.595788 containerd[1692]: time="2025-12-12T17:29:56.595763281Z" level=info msg="StartContainer for \"fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc\"" Dec 12 17:29:56.597091 containerd[1692]: time="2025-12-12T17:29:56.597066528Z" level=info msg="connecting to shim fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc" address="unix:///run/containerd/s/40436e866d8bb7b4095aac010df10642d4c1c1a7e8dfaee2ccaa866d7b43c82b" protocol=ttrpc version=3 Dec 12 17:29:56.620577 systemd[1]: Started cri-containerd-fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc.scope - libcontainer container fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc. 
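[editor's note] The "back-off 10s restarting failed container=tigera-operator" message above is kubelet's crash-loop back-off: each failed restart roughly doubles the delay, starting from 10 seconds and capping at five minutes, which is why a pod that keeps crashing eventually sits in CrashLoopBackOff for 5m between attempts. A toy sketch of that schedule (the 10s base and 5m cap reflect kubelet's long-standing defaults; the exact doubling and reset behaviour of the real implementation is approximated, not reproduced):

```python
#!/usr/bin/env python3
"""Toy model of kubelet's crash-loop back-off for a repeatedly failing container."""

INITIAL_S = 10   # "back-off 10s restarting failed container" in the log above
MAX_S = 5 * 60   # delays stop growing at five minutes

def backoff_schedule(restarts: int) -> list[int]:
    delays, delay = [], INITIAL_S
    for _ in range(restarts):
        delays.append(delay)
        delay = min(delay * 2, MAX_S)
    return delays

if __name__ == "__main__":
    # After a handful of crashes the delay is pinned at 300s per restart.
    print(backoff_schedule(8))  # [10, 20, 40, 80, 160, 300, 300, 300]
```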
Dec 12 17:29:56.631000 audit: BPF prog-id=267 op=LOAD Dec 12 17:29:56.632000 audit: BPF prog-id=268 op=LOAD Dec 12 17:29:56.635027 kernel: audit: type=1334 audit(1765560596.631:909): prog-id=267 op=LOAD Dec 12 17:29:56.635086 kernel: audit: type=1334 audit(1765560596.632:910): prog-id=268 op=LOAD Dec 12 17:29:56.632000 audit[5718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.632000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.642728 kernel: audit: type=1300 audit(1765560596.632:910): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.642860 kernel: audit: type=1327 audit(1765560596.632:910): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.633000 audit: BPF prog-id=268 op=UNLOAD Dec 12 17:29:56.633000 audit[5718]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.633000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.634000 audit: BPF prog-id=269 op=LOAD Dec 12 17:29:56.634000 audit[5718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.634000 audit: BPF prog-id=270 op=LOAD Dec 12 17:29:56.634000 audit[5718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.634000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.634000 audit: BPF prog-id=270 op=UNLOAD Dec 12 17:29:56.634000 audit[5718]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.634000 audit: BPF prog-id=269 op=UNLOAD Dec 12 17:29:56.634000 audit[5718]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.634000 audit: BPF prog-id=271 op=LOAD Dec 12 17:29:56.634000 audit[5718]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2597 pid=5718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:29:56.634000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664663031636663346538313862643366623332333863646266326565 Dec 12 17:29:56.668261 containerd[1692]: time="2025-12-12T17:29:56.668205049Z" level=info msg="StartContainer for \"fdf01cfc4e818bd3fb3238cdbf2ee96c71aa078b5540d36c0692eed3809d0ccc\" returns successfully" Dec 12 17:29:56.969279 kubelet[2892]: E1212 17:29:56.969103 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67f4c54f9f-ttqlr" podUID="3dccdef9-807b-4475-ade1-4a0bc2c4fe76" Dec 12 17:29:57.970209 kubelet[2892]: E1212 17:29:57.970154 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67768bd4f8-7kbcr" podUID="6cf9d953-a32f-4658-a2fe-c69d46e96850" Dec 12 17:29:58.969956 containerd[1692]: time="2025-12-12T17:29:58.969752141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Dec 12 17:29:59.300747 containerd[1692]: time="2025-12-12T17:29:59.300694382Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:29:59.302562 containerd[1692]: time="2025-12-12T17:29:59.302519991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Dec 12 17:29:59.302644 containerd[1692]: time="2025-12-12T17:29:59.302596231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Dec 12 17:29:59.302788 kubelet[2892]: E1212 17:29:59.302752 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:59.303041 kubelet[2892]: E1212 17:29:59.302797 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Dec 12 17:29:59.303041 kubelet[2892]: E1212 17:29:59.302871 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-849959c995-dlz6z_calico-apiserver(f42fc071-54b4-491f-b752-90f8070727e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Dec 12 17:29:59.303041 kubelet[2892]: E1212 17:29:59.302901 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-849959c995-dlz6z" podUID="f42fc071-54b4-491f-b752-90f8070727e3" Dec 12 17:30:01.183684 systemd[1]: cri-containerd-c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8.scope: Deactivated successfully. Dec 12 17:30:01.183000 audit: BPF prog-id=272 op=LOAD Dec 12 17:30:01.184323 systemd[1]: cri-containerd-c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8.scope: Consumed 4.215s CPU time, 25.3M memory peak. 
Dec 12 17:30:01.185366 containerd[1692]: time="2025-12-12T17:30:01.185227915Z" level=info msg="received container exit event container_id:\"c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8\" id:\"c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8\" pid:2745 exit_status:1 exited_at:{seconds:1765560601 nanos:184839993}" Dec 12 17:30:01.185745 kernel: kauditd_printk_skb: 18 callbacks suppressed Dec 12 17:30:01.185797 kernel: audit: type=1334 audit(1765560601.183:917): prog-id=272 op=LOAD Dec 12 17:30:01.183000 audit: BPF prog-id=93 op=UNLOAD Dec 12 17:30:01.188132 kernel: audit: type=1334 audit(1765560601.183:918): prog-id=93 op=UNLOAD Dec 12 17:30:01.188000 audit: BPF prog-id=108 op=UNLOAD Dec 12 17:30:01.188000 audit: BPF prog-id=112 op=UNLOAD Dec 12 17:30:01.191145 kernel: audit: type=1334 audit(1765560601.188:919): prog-id=108 op=UNLOAD Dec 12 17:30:01.191191 kernel: audit: type=1334 audit(1765560601.188:920): prog-id=112 op=UNLOAD Dec 12 17:30:01.208444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8-rootfs.mount: Deactivated successfully. Dec 12 17:30:01.583142 kubelet[2892]: I1212 17:30:01.581680 2892 scope.go:117] "RemoveContainer" containerID="c941a336d6038bef583258bc97d1d9552f921ffca7ef635bfc6f2379406fc4c8" Dec 12 17:30:01.585156 containerd[1692]: time="2025-12-12T17:30:01.585122386Z" level=info msg="CreateContainer within sandbox \"cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 12 17:30:01.597150 containerd[1692]: time="2025-12-12T17:30:01.596159282Z" level=info msg="Container f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b: CDI devices from CRI Config.CDIDevices: []" Dec 12 17:30:01.604130 containerd[1692]: time="2025-12-12T17:30:01.604075923Z" level=info msg="CreateContainer within sandbox \"cb4c330d3d6620a5287e065f4a8ab946ddd93f2c565fac375c8636ccdbc471c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b\"" Dec 12 17:30:01.604652 containerd[1692]: time="2025-12-12T17:30:01.604630925Z" level=info msg="StartContainer for \"f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b\"" Dec 12 17:30:01.605715 containerd[1692]: time="2025-12-12T17:30:01.605680531Z" level=info msg="connecting to shim f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b" address="unix:///run/containerd/s/0d51fb46d4ac48db7714c41a359da71a02e1cba2bc75d1ee8222e55cc445e105" protocol=ttrpc version=3 Dec 12 17:30:01.626382 systemd[1]: Started cri-containerd-f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b.scope - libcontainer container f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b. 
Dec 12 17:30:01.635000 audit: BPF prog-id=273 op=LOAD Dec 12 17:30:01.636000 audit: BPF prog-id=274 op=LOAD Dec 12 17:30:01.638178 kernel: audit: type=1334 audit(1765560601.635:921): prog-id=273 op=LOAD Dec 12 17:30:01.636000 audit[5779]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.639237 kernel: audit: type=1334 audit(1765560601.636:922): prog-id=274 op=LOAD Dec 12 17:30:01.645827 kernel: audit: type=1300 audit(1765560601.636:922): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.645907 kernel: audit: type=1327 audit(1765560601.636:922): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.636000 audit: BPF prog-id=274 op=UNLOAD Dec 12 17:30:01.646893 kernel: audit: type=1334 audit(1765560601.636:923): prog-id=274 op=UNLOAD Dec 12 17:30:01.636000 audit[5779]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.650349 kernel: audit: type=1300 audit(1765560601.636:923): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.636000 audit: BPF prog-id=275 op=LOAD Dec 12 17:30:01.636000 audit[5779]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.637000 audit: BPF prog-id=276 op=LOAD Dec 12 17:30:01.637000 audit[5779]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.637000 audit: BPF prog-id=276 op=UNLOAD Dec 12 17:30:01.637000 audit[5779]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.637000 audit: BPF prog-id=275 op=UNLOAD Dec 12 17:30:01.637000 audit[5779]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.637000 audit: BPF prog-id=277 op=LOAD Dec 12 17:30:01.637000 audit[5779]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2626 pid=5779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 12 17:30:01.637000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6639386233653135623065363962623263613439326339656235646232 Dec 12 17:30:01.669648 containerd[1692]: time="2025-12-12T17:30:01.669594535Z" level=info msg="StartContainer for \"f98b3e15b0e69bb2ca492c9eb5db2af711f140a8e1e01725dff229d3a474e93b\" returns successfully" Dec 12 17:30:02.380230 update_engine[1667]: I20251212 17:30:02.380142 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:30:02.380230 update_engine[1667]: I20251212 17:30:02.380232 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:30:02.380605 update_engine[1667]: I20251212 17:30:02.380553 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 17:30:02.388554 update_engine[1667]: E20251212 17:30:02.388509 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:30:02.388643 update_engine[1667]: I20251212 17:30:02.388588 1667 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 12 17:30:02.388643 update_engine[1667]: I20251212 17:30:02.388597 1667 omaha_request_action.cc:617] Omaha request response: Dec 12 17:30:02.388692 update_engine[1667]: E20251212 17:30:02.388676 1667 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 12 17:30:02.388714 update_engine[1667]: I20251212 17:30:02.388692 1667 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 12 17:30:02.388714 update_engine[1667]: I20251212 17:30:02.388697 1667 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 17:30:02.388714 update_engine[1667]: I20251212 17:30:02.388701 1667 update_attempter.cc:306] Processing Done. Dec 12 17:30:02.388773 update_engine[1667]: E20251212 17:30:02.388715 1667 update_attempter.cc:619] Update failed. Dec 12 17:30:02.388773 update_engine[1667]: I20251212 17:30:02.388720 1667 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 12 17:30:02.388773 update_engine[1667]: I20251212 17:30:02.388723 1667 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 12 17:30:02.388773 update_engine[1667]: I20251212 17:30:02.388727 1667 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 12 17:30:02.388858 update_engine[1667]: I20251212 17:30:02.388792 1667 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 12 17:30:02.388858 update_engine[1667]: I20251212 17:30:02.388810 1667 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 12 17:30:02.388858 update_engine[1667]: I20251212 17:30:02.388819 1667 omaha_request_action.cc:272] Request: Dec 12 17:30:02.388858 update_engine[1667]: Dec 12 17:30:02.388858 update_engine[1667]: Dec 12 17:30:02.388858 update_engine[1667]: Dec 12 17:30:02.388858 update_engine[1667]: Dec 12 17:30:02.388858 update_engine[1667]: Dec 12 17:30:02.388858 update_engine[1667]: Dec 12 17:30:02.388858 update_engine[1667]: I20251212 17:30:02.388825 1667 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 12 17:30:02.388858 update_engine[1667]: I20251212 17:30:02.388844 1667 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 12 17:30:02.389436 locksmithd[1724]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 12 17:30:02.389830 update_engine[1667]: I20251212 17:30:02.389786 1667 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 12 17:30:02.396250 update_engine[1667]: E20251212 17:30:02.396042 1667 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396138 1667 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396149 1667 omaha_request_action.cc:617] Omaha request response: Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396155 1667 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396159 1667 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396163 1667 update_attempter.cc:306] Processing Done. Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396168 1667 update_attempter.cc:310] Error event sent. Dec 12 17:30:02.396250 update_engine[1667]: I20251212 17:30:02.396176 1667 update_check_scheduler.cc:74] Next update check in 42m59s Dec 12 17:30:02.396752 locksmithd[1724]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 12 17:30:02.971352 containerd[1692]: time="2025-12-12T17:30:02.971298508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Dec 12 17:30:03.390130 containerd[1692]: time="2025-12-12T17:30:03.390065795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Dec 12 17:30:03.391710 containerd[1692]: time="2025-12-12T17:30:03.391675163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Dec 12 17:30:03.391860 containerd[1692]: time="2025-12-12T17:30:03.391759644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Dec 12 17:30:03.391932 kubelet[2892]: E1212 17:30:03.391895 2892 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:30:03.392214 kubelet[2892]: E1212 17:30:03.391952 2892 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Dec 12 17:30:03.392214 kubelet[2892]: E1212 17:30:03.392040 2892 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7db4fd4bfb-cz8pj_calico-system(9edb9d6f-c2b4-4544-aec2-09be399b44d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Dec 12 17:30:03.392214 kubelet[2892]: E1212 17:30:03.392069 2892 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db4fd4bfb-cz8pj" podUID="9edb9d6f-c2b4-4544-aec2-09be399b44d1" Dec 12 17:30:05.945175 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec Dec 12 17:30:05.970025 kubelet[2892]: E1212 17:30:05.969966 2892 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5cw4v" podUID="131dba81-ae70-4090-a6eb-8ebf5f86d388" Dec 12 17:30:06.085481 kubelet[2892]: E1212 17:30:06.085278 2892 controller.go:195] "Failed to update lease" err="Put \"https://10.0.11.71:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-e-d121438740?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
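[editor's note] Taken together, this stretch of the journal is dominated by the same few pods cycling between ErrImagePull and ImagePullBackOff for the Calico v3.30.4 images, plus the control-plane containers being restarted. When triaging a dump like this it helps to collapse the repetition into one line per pod and reason; a quick sketch that parses the kubelet pod_workers.go messages in exactly the format shown above (nothing else about the log layout is assumed):

```python
#!/usr/bin/env python3
"""Summarise the repeated "Error syncing pod" messages from a journal dump on stdin."""
import re
import sys
from collections import Counter

POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')
REASON_RE = re.compile(r"(ErrImagePull|ImagePullBackOff|CrashLoopBackOff)")

def summarise(lines) -> Counter:
    counts: Counter = Counter()
    for line in lines:
        if "Error syncing pod" not in line:
            continue
        pod = POD_RE.search(line)
        reason = REASON_RE.search(line)  # first reason is the outer one kubelet reports
        if pod and reason:
            counts[(pod.group("pod"), reason.group(1))] += 1
    return counts

if __name__ == "__main__":
    for (pod, reason), n in summarise(sys.stdin).most_common():
        print(f"{n:4d}  {reason:18s} {pod}")
```

Feeding this section through it would surface the calico-apiserver, whisker, goldmane, csi-node-driver, and kube-controllers pods as the repeat offenders, all blocked on the same missing image tags.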