Jan 14 00:54:42.422244 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 14 00:54:42.422267 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 13 22:00:26 -00 2026
Jan 14 00:54:42.422278 kernel: KASLR enabled
Jan 14 00:54:42.422284 kernel: efi: EFI v2.7 by EDK II
Jan 14 00:54:42.422290 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438357218
Jan 14 00:54:42.422296 kernel: random: crng init done
Jan 14 00:54:42.422304 kernel: secureboot: Secure boot disabled
Jan 14 00:54:42.422310 kernel: ACPI: Early table checksum verification disabled
Jan 14 00:54:42.422316 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Jan 14 00:54:42.422323 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Jan 14 00:54:42.422330 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422336 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422342 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422348 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422357 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422364 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422371 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422377 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422384 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422390 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 00:54:42.422397 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 14 00:54:42.422403 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 14 00:54:42.422410 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 14 00:54:42.422417 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Jan 14 00:54:42.422424 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Jan 14 00:54:42.422430 kernel: Zone ranges:
Jan 14 00:54:42.422437 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 14 00:54:42.422443 kernel:   DMA32    empty
Jan 14 00:54:42.422450 kernel:   Normal   [mem 0x0000000100000000-0x000000043fffffff]
Jan 14 00:54:42.422456 kernel:   Device   empty
Jan 14 00:54:42.422463 kernel: Movable zone start for each node
Jan 14 00:54:42.422469 kernel: Early memory node ranges
Jan 14 00:54:42.422475 kernel:   node   0: [mem 0x0000000040000000-0x000000043843ffff]
Jan 14 00:54:42.422482 kernel:   node   0: [mem 0x0000000438440000-0x000000043872ffff]
Jan 14 00:54:42.422488 kernel:   node   0: [mem 0x0000000438730000-0x000000043bbfffff]
Jan 14 00:54:42.422496 kernel:   node   0: [mem 0x000000043bc00000-0x000000043bfdffff]
Jan 14 00:54:42.422503 kernel:   node   0: [mem 0x000000043bfe0000-0x000000043fffffff]
Jan 14 00:54:42.422509 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Jan 14 00:54:42.422516 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 14 00:54:42.422523 kernel: psci: probing for conduit method from ACPI.
Jan 14 00:54:42.422532 kernel: psci: PSCIv1.3 detected in firmware.
Jan 14 00:54:42.422540 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 14 00:54:42.422547 kernel: psci: Trusted OS migration not required
Jan 14 00:54:42.422554 kernel: psci: SMC Calling Convention v1.1
Jan 14 00:54:42.422561 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 14 00:54:42.422568 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 14 00:54:42.422575 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 14 00:54:42.422582 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Jan 14 00:54:42.422589 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Jan 14 00:54:42.422597 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 14 00:54:42.422604 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 14 00:54:42.422611 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 14 00:54:42.422618 kernel: Detected PIPT I-cache on CPU0
Jan 14 00:54:42.422625 kernel: CPU features: detected: GIC system register CPU interface
Jan 14 00:54:42.422632 kernel: CPU features: detected: Spectre-v4
Jan 14 00:54:42.422639 kernel: CPU features: detected: Spectre-BHB
Jan 14 00:54:42.422646 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 14 00:54:42.422653 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 14 00:54:42.422659 kernel: CPU features: detected: ARM erratum 1418040
Jan 14 00:54:42.422666 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 14 00:54:42.422675 kernel: alternatives: applying boot alternatives
Jan 14 00:54:42.422682 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b
Jan 14 00:54:42.422704 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 14 00:54:42.422712 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 00:54:42.422719 kernel: Fallback order for Node 0: 0
Jan 14 00:54:42.422726 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Jan 14 00:54:42.422733 kernel: Policy zone: Normal
Jan 14 00:54:42.422740 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 00:54:42.422747 kernel: software IO TLB: area num 4.
Jan 14 00:54:42.422754 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 14 00:54:42.422763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 00:54:42.422770 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 00:54:42.422777 kernel: rcu: RCU event tracing is enabled.
Jan 14 00:54:42.422785 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 00:54:42.422792 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 00:54:42.422799 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 00:54:42.422806 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 00:54:42.422813 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 00:54:42.422820 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:42.422827 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 00:54:42.422834 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 14 00:54:42.422842 kernel: GICv3: 256 SPIs implemented
Jan 14 00:54:42.422849 kernel: GICv3: 0 Extended SPIs implemented
Jan 14 00:54:42.422856 kernel: Root IRQ handler: gic_handle_irq
Jan 14 00:54:42.422863 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 14 00:54:42.422869 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 14 00:54:42.422876 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 14 00:54:42.422883 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 14 00:54:42.422890 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Jan 14 00:54:42.422897 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Jan 14 00:54:42.422904 kernel: GICv3: using LPI property table @0x0000000100130000
Jan 14 00:54:42.422911 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Jan 14 00:54:42.422918 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 00:54:42.422926 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 00:54:42.422933 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 14 00:54:42.422940 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 14 00:54:42.422947 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 14 00:54:42.422954 kernel: arm-pv: using stolen time PV
Jan 14 00:54:42.422962 kernel: Console: colour dummy device 80x25
Jan 14 00:54:42.422969 kernel: ACPI: Core revision 20240827
Jan 14 00:54:42.422976 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 14 00:54:42.422986 kernel: pid_max: default: 32768 minimum: 301
Jan 14 00:54:42.422993 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 00:54:42.423001 kernel: landlock: Up and running.
Jan 14 00:54:42.423008 kernel: SELinux: Initializing.
Jan 14 00:54:42.423015 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 00:54:42.423023 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 00:54:42.423030 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 00:54:42.423038 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 00:54:42.423047 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 00:54:42.423054 kernel: Remapping and enabling EFI services.
Jan 14 00:54:42.423061 kernel: smp: Bringing up secondary CPUs ...
Jan 14 00:54:42.423068 kernel: Detected PIPT I-cache on CPU1
Jan 14 00:54:42.423076 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 14 00:54:42.423083 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Jan 14 00:54:42.423090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 00:54:42.423099 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 14 00:54:42.423107 kernel: Detected PIPT I-cache on CPU2
Jan 14 00:54:42.423119 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 14 00:54:42.423128 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Jan 14 00:54:42.423136 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 00:54:42.423143 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 14 00:54:42.423150 kernel: Detected PIPT I-cache on CPU3
Jan 14 00:54:42.423158 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 14 00:54:42.423167 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Jan 14 00:54:42.423175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 00:54:42.423182 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 14 00:54:42.423190 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 00:54:42.423197 kernel: SMP: Total of 4 processors activated.
Jan 14 00:54:42.423205 kernel: CPU: All CPU(s) started at EL1
Jan 14 00:54:42.423213 kernel: CPU features: detected: 32-bit EL0 Support
Jan 14 00:54:42.423221 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 14 00:54:42.423229 kernel: CPU features: detected: Common not Private translations
Jan 14 00:54:42.423237 kernel: CPU features: detected: CRC32 instructions
Jan 14 00:54:42.423245 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 14 00:54:42.423252 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 14 00:54:42.423260 kernel: CPU features: detected: LSE atomic instructions
Jan 14 00:54:42.423269 kernel: CPU features: detected: Privileged Access Never
Jan 14 00:54:42.423276 kernel: CPU features: detected: RAS Extension Support
Jan 14 00:54:42.423284 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 14 00:54:42.423292 kernel: alternatives: applying system-wide alternatives
Jan 14 00:54:42.423299 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jan 14 00:54:42.423307 kernel: Memory: 16324432K/16777216K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 430000K reserved, 16384K cma-reserved)
Jan 14 00:54:42.423315 kernel: devtmpfs: initialized
Jan 14 00:54:42.423324 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 00:54:42.423332 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 00:54:42.423339 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 14 00:54:42.423347 kernel: 0 pages in range for non-PLT usage
Jan 14 00:54:42.423355 kernel: 515168 pages in range for PLT usage
Jan 14 00:54:42.423362 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 00:54:42.423370 kernel: SMBIOS 3.0.0 present.
Jan 14 00:54:42.423377 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jan 14 00:54:42.423386 kernel: DMI: Memory slots populated: 1/1
Jan 14 00:54:42.423394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 00:54:42.423401 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Jan 14 00:54:42.423409 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 14 00:54:42.423417 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 14 00:54:42.423424 kernel: audit: initializing netlink subsys (disabled)
Jan 14 00:54:42.423432 kernel: audit: type=2000 audit(0.037:1): state=initialized audit_enabled=0 res=1
Jan 14 00:54:42.423441 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 00:54:42.423448 kernel: cpuidle: using governor menu
Jan 14 00:54:42.423456 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 14 00:54:42.423464 kernel: ASID allocator initialised with 32768 entries
Jan 14 00:54:42.423471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 00:54:42.423479 kernel: Serial: AMBA PL011 UART driver
Jan 14 00:54:42.423486 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 00:54:42.423496 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 00:54:42.423503 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 14 00:54:42.423511 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 14 00:54:42.423518 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 00:54:42.423526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 00:54:42.423533 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 14 00:54:42.423541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 14 00:54:42.423549 kernel: ACPI: Added _OSI(Module Device)
Jan 14 00:54:42.423557 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 00:54:42.423565 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 00:54:42.423572 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 00:54:42.423580 kernel: ACPI: Interpreter enabled
Jan 14 00:54:42.423587 kernel: ACPI: Using GIC for interrupt routing
Jan 14 00:54:42.423595 kernel: ACPI: MCFG table detected, 1 entries
Jan 14 00:54:42.423602 kernel: ACPI: CPU0 has been hot-added
Jan 14 00:54:42.423611 kernel: ACPI: CPU1 has been hot-added
Jan 14 00:54:42.423619 kernel: ACPI: CPU2 has been hot-added
Jan 14 00:54:42.423626 kernel: ACPI: CPU3 has been hot-added
Jan 14 00:54:42.423634 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 14 00:54:42.423642 kernel: printk: legacy console [ttyAMA0] enabled
Jan 14 00:54:42.423649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 00:54:42.423828 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 00:54:42.423924 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 14 00:54:42.424010 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 14 00:54:42.424092 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 14 00:54:42.424174 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 14 00:54:42.424184 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 14 00:54:42.424192 kernel: PCI host bridge to bus 0000:00
Jan 14 00:54:42.424283 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 14 00:54:42.424362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 14 00:54:42.424436 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 14 00:54:42.424529 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 00:54:42.424636 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 14 00:54:42.424748 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.424841 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Jan 14 00:54:42.424924 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 14 00:54:42.425006 kernel: pci 0000:00:01.0:   bridge window [mem 0x12400000-0x124fffff]
Jan 14 00:54:42.425087 kernel: pci 0000:00:01.0:   bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 14 00:54:42.425176 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.425262 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Jan 14 00:54:42.425344 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 14 00:54:42.425426 kernel: pci 0000:00:01.1:   bridge window [mem 0x12300000-0x123fffff]
Jan 14 00:54:42.425516 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.425607 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Jan 14 00:54:42.425717 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Jan 14 00:54:42.425816 kernel: pci 0000:00:01.2:   bridge window [mem 0x12200000-0x122fffff]
Jan 14 00:54:42.425909 kernel: pci 0000:00:01.2:   bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 14 00:54:42.426007 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.426113 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Jan 14 00:54:42.426213 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Jan 14 00:54:42.426301 kernel: pci 0000:00:01.3:   bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 14 00:54:42.426394 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.426479 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Jan 14 00:54:42.426562 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Jan 14 00:54:42.426643 kernel: pci 0000:00:01.4:   bridge window [mem 0x12100000-0x121fffff]
Jan 14 00:54:42.426738 kernel: pci 0000:00:01.4:   bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 14 00:54:42.426831 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.426913 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Jan 14 00:54:42.426995 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Jan 14 00:54:42.427076 kernel: pci 0000:00:01.5:   bridge window [mem 0x12000000-0x120fffff]
Jan 14 00:54:42.427157 kernel: pci 0000:00:01.5:   bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 14 00:54:42.427246 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.427330 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Jan 14 00:54:42.427411 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Jan 14 00:54:42.427499 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.427582 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Jan 14 00:54:42.427662 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Jan 14 00:54:42.427784 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.427873 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Jan 14 00:54:42.427955 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Jan 14 00:54:42.428046 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.428130 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Jan 14 00:54:42.428216 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Jan 14 00:54:42.428305 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.428388 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Jan 14 00:54:42.428470 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jan 14 00:54:42.428594 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.428680 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Jan 14 00:54:42.428786 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Jan 14 00:54:42.428877 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.428960 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Jan 14 00:54:42.429042 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Jan 14 00:54:42.429130 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.429215 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Jan 14 00:54:42.429298 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Jan 14 00:54:42.429389 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.429473 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Jan 14 00:54:42.429554 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Jan 14 00:54:42.429664 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.429804 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Jan 14 00:54:42.429890 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Jan 14 00:54:42.429984 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.430067 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Jan 14 00:54:42.430155 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Jan 14 00:54:42.430248 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.430349 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Jan 14 00:54:42.430440 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Jan 14 00:54:42.430522 kernel: pci 0000:00:03.1:   bridge window [io 0xf000-0xffff]
Jan 14 00:54:42.430604 kernel: pci 0000:00:03.1:   bridge window [mem 0x11e00000-0x11ffffff]
Jan 14 00:54:42.430702 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.430792 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Jan 14 00:54:42.430879 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Jan 14 00:54:42.430960 kernel: pci 0000:00:03.2:   bridge window [io 0xe000-0xefff]
Jan 14 00:54:42.431043 kernel: pci 0000:00:03.2:   bridge window [mem 0x11c00000-0x11dfffff]
Jan 14 00:54:42.431134 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.431216 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Jan 14 00:54:42.431297 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Jan 14 00:54:42.431382 kernel: pci 0000:00:03.3:   bridge window [io 0xd000-0xdfff]
Jan 14 00:54:42.431465 kernel: pci 0000:00:03.3:   bridge window [mem 0x11a00000-0x11bfffff]
Jan 14 00:54:42.431555 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.431638 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Jan 14 00:54:42.431737 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Jan 14 00:54:42.431831 kernel: pci 0000:00:03.4:   bridge window [io 0xc000-0xcfff]
Jan 14 00:54:42.431919 kernel: pci 0000:00:03.4:   bridge window [mem 0x11800000-0x119fffff]
Jan 14 00:54:42.432009 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.432091 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Jan 14 00:54:42.432173 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Jan 14 00:54:42.432254 kernel: pci 0000:00:03.5:   bridge window [io 0xb000-0xbfff]
Jan 14 00:54:42.432336 kernel: pci 0000:00:03.5:   bridge window [mem 0x11600000-0x117fffff]
Jan 14 00:54:42.432427 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.432526 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Jan 14 00:54:42.432613 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Jan 14 00:54:42.432707 kernel: pci 0000:00:03.6:   bridge window [io 0xa000-0xafff]
Jan 14 00:54:42.432791 kernel: pci 0000:00:03.6:   bridge window [mem 0x11400000-0x115fffff]
Jan 14 00:54:42.432885 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.432971 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Jan 14 00:54:42.433052 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Jan 14 00:54:42.433133 kernel: pci 0000:00:03.7:   bridge window [io 0x9000-0x9fff]
Jan 14 00:54:42.433215 kernel: pci 0000:00:03.7:   bridge window [mem 0x11200000-0x113fffff]
Jan 14 00:54:42.433304 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.433387 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Jan 14 00:54:42.433470 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Jan 14 00:54:42.433552 kernel: pci 0000:00:04.0:   bridge window [io 0x8000-0x8fff]
Jan 14 00:54:42.433633 kernel: pci 0000:00:04.0:   bridge window [mem 0x11000000-0x111fffff]
Jan 14 00:54:42.435992 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.436099 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Jan 14 00:54:42.436196 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Jan 14 00:54:42.436298 kernel: pci 0000:00:04.1:   bridge window [io 0x7000-0x7fff]
Jan 14 00:54:42.436382 kernel: pci 0000:00:04.1:   bridge window [mem 0x10e00000-0x10ffffff]
Jan 14 00:54:42.436474 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.436577 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Jan 14 00:54:42.436661 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Jan 14 00:54:42.436763 kernel: pci 0000:00:04.2:   bridge window [io 0x6000-0x6fff]
Jan 14 00:54:42.436848 kernel: pci 0000:00:04.2:   bridge window [mem 0x10c00000-0x10dfffff]
Jan 14 00:54:42.436939 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.437023 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Jan 14 00:54:42.437105 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Jan 14 00:54:42.437202 kernel: pci 0000:00:04.3:   bridge window [io 0x5000-0x5fff]
Jan 14 00:54:42.437289 kernel: pci 0000:00:04.3:   bridge window [mem 0x10a00000-0x10bfffff]
Jan 14 00:54:42.437384 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.437467 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Jan 14 00:54:42.437548 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Jan 14 00:54:42.437631 kernel: pci 0000:00:04.4:   bridge window [io 0x4000-0x4fff]
Jan 14 00:54:42.437726 kernel: pci 0000:00:04.4:   bridge window [mem 0x10800000-0x109fffff]
Jan 14 00:54:42.437837 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.437925 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Jan 14 00:54:42.438007 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Jan 14 00:54:42.438090 kernel: pci 0000:00:04.5:   bridge window [io 0x3000-0x3fff]
Jan 14 00:54:42.438174 kernel: pci 0000:00:04.5:   bridge window [mem 0x10600000-0x107fffff]
Jan 14 00:54:42.438262 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.438344 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Jan 14 00:54:42.438425 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Jan 14 00:54:42.438505 kernel: pci 0000:00:04.6:   bridge window [io 0x2000-0x2fff]
Jan 14 00:54:42.438586 kernel: pci 0000:00:04.6:   bridge window [mem 0x10400000-0x105fffff]
Jan 14 00:54:42.438677 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.438801 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Jan 14 00:54:42.438888 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Jan 14 00:54:42.438972 kernel: pci 0000:00:04.7:   bridge window [io 0x1000-0x1fff]
Jan 14 00:54:42.439055 kernel: pci 0000:00:04.7:   bridge window [mem 0x10200000-0x103fffff]
Jan 14 00:54:42.439145 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 00:54:42.439233 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Jan 14 00:54:42.439314 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Jan 14 00:54:42.439396 kernel: pci 0000:00:05.0:   bridge window [io 0x0000-0x0fff]
Jan 14 00:54:42.439479 kernel: pci 0000:00:05.0:   bridge window [mem 0x10000000-0x101fffff]
Jan 14 00:54:42.439574 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 14 00:54:42.439662 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Jan 14 00:54:42.439759 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 14 00:54:42.439843 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 14 00:54:42.439942 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 14 00:54:42.440027 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Jan 14 00:54:42.440120 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Jan 14 00:54:42.440208 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Jan 14 00:54:42.440293 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 14 00:54:42.440389 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 14 00:54:42.440477 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 14 00:54:42.440593 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 14 00:54:42.440709 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Jan 14 00:54:42.440805 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 14 00:54:42.440899 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Jan 14 00:54:42.440984 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Jan 14 00:54:42.441068 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 14 00:54:42.441154 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 14 00:54:42.441242 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 14 00:54:42.441325 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 14 00:54:42.441410 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 14 00:54:42.441493 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 14 00:54:42.441577 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 14 00:54:42.441665 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 14 00:54:42.441760 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 14 00:54:42.441843 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 14 00:54:42.441929 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 14 00:54:42.442014 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 14 00:54:42.442097 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 14 00:54:42.442182 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 14 00:54:42.442265 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 14 00:54:42.442346 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 14 00:54:42.442432 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 14 00:54:42.442518 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 14 00:54:42.442602 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 14 00:54:42.442695 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 14 00:54:42.442786 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
Jan 14 00:54:42.442870 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
Jan 14 00:54:42.442958 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 14 00:54:42.443043 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 14 00:54:42.443125 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 14 00:54:42.443213 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 14 00:54:42.443295 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 14 00:54:42.443376 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 14 00:54:42.443466 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 14 00:54:42.443549 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000
Jan 14 00:54:42.443631 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000
Jan 14 00:54:42.443731 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Jan 14 00:54:42.443817 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jan 14 00:54:42.443899 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000
Jan 14 00:54:42.443989 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Jan 14 00:54:42.444071 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000
Jan 14 00:54:42.444153 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000
Jan 14 00:54:42.444240 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Jan 14 00:54:42.444322 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000
Jan 14 00:54:42.444406 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000
Jan 14
00:54:42.444527 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 14 00:54:42.444624 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Jan 14 00:54:42.444735 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Jan 14 00:54:42.444830 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 14 00:54:42.444913 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Jan 14 00:54:42.445000 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Jan 14 00:54:42.445087 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 14 00:54:42.445169 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Jan 14 00:54:42.445250 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Jan 14 00:54:42.445340 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 14 00:54:42.445425 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Jan 14 00:54:42.445509 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Jan 14 00:54:42.445597 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 14 00:54:42.445707 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Jan 14 00:54:42.445800 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 Jan 14 00:54:42.445911 kernel: pci 0000:00:03.2: 
bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 14 00:54:42.446003 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Jan 14 00:54:42.446089 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Jan 14 00:54:42.446177 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 14 00:54:42.446261 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Jan 14 00:54:42.446344 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Jan 14 00:54:42.446431 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 14 00:54:42.446516 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Jan 14 00:54:42.446598 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Jan 14 00:54:42.446695 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 14 00:54:42.446789 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Jan 14 00:54:42.446873 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Jan 14 00:54:42.446973 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 14 00:54:42.447059 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Jan 14 00:54:42.447141 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Jan 14 00:54:42.447233 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] 
add_size 1000 Jan 14 00:54:42.447316 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Jan 14 00:54:42.447411 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Jan 14 00:54:42.447502 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 14 00:54:42.447585 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Jan 14 00:54:42.447667 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Jan 14 00:54:42.447771 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 14 00:54:42.447854 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Jan 14 00:54:42.447961 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Jan 14 00:54:42.450323 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 14 00:54:42.450440 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Jan 14 00:54:42.450524 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Jan 14 00:54:42.450614 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 14 00:54:42.450712 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Jan 14 00:54:42.450825 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Jan 14 00:54:42.450914 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 14 00:54:42.450997 kernel: 
pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Jan 14 00:54:42.451079 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Jan 14 00:54:42.451190 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 14 00:54:42.451276 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Jan 14 00:54:42.451363 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Jan 14 00:54:42.451456 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 14 00:54:42.451541 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Jan 14 00:54:42.451623 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Jan 14 00:54:42.451749 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 14 00:54:42.451837 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Jan 14 00:54:42.451923 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Jan 14 00:54:42.452010 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 14 00:54:42.452092 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Jan 14 00:54:42.452175 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Jan 14 00:54:42.452262 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 14 00:54:42.452346 kernel: pci 0000:00:01.0: bridge window [mem 
0x8000000000-0x80001fffff 64bit pref]: assigned Jan 14 00:54:42.452432 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 14 00:54:42.452533 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 14 00:54:42.452625 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 14 00:54:42.452723 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 14 00:54:42.452816 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 14 00:54:42.452899 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 14 00:54:42.452991 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 14 00:54:42.453075 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 14 00:54:42.453160 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 14 00:54:42.453242 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 14 00:54:42.454786 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 14 00:54:42.454924 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 14 00:54:42.455025 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 14 00:54:42.455113 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 14 00:54:42.455202 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 14 00:54:42.455293 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 14 00:54:42.455384 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Jan 14 00:54:42.455470 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: 
assigned Jan 14 00:54:42.455561 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Jan 14 00:54:42.455663 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Jan 14 00:54:42.455776 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Jan 14 00:54:42.455864 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Jan 14 00:54:42.455950 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Jan 14 00:54:42.456033 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Jan 14 00:54:42.456119 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Jan 14 00:54:42.456205 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Jan 14 00:54:42.456290 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Jan 14 00:54:42.456372 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Jan 14 00:54:42.456458 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Jan 14 00:54:42.456561 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Jan 14 00:54:42.456654 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Jan 14 00:54:42.456831 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Jan 14 00:54:42.456923 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Jan 14 00:54:42.457007 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Jan 14 00:54:42.457092 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Jan 14 00:54:42.457175 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Jan 14 00:54:42.457260 kernel: pci 
0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Jan 14 00:54:42.457366 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Jan 14 00:54:42.457453 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Jan 14 00:54:42.457537 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Jan 14 00:54:42.457624 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Jan 14 00:54:42.457718 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Jan 14 00:54:42.457807 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Jan 14 00:54:42.457892 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Jan 14 00:54:42.457984 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Jan 14 00:54:42.458073 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Jan 14 00:54:42.458163 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Jan 14 00:54:42.458247 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Jan 14 00:54:42.458335 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Jan 14 00:54:42.458419 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Jan 14 00:54:42.458507 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Jan 14 00:54:42.458591 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Jan 14 00:54:42.458676 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Jan 14 00:54:42.458779 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Jan 14 00:54:42.458867 kernel: pci 0000:00:04.4: bridge window [mem 
0x13800000-0x139fffff]: assigned Jan 14 00:54:42.458951 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Jan 14 00:54:42.459039 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Jan 14 00:54:42.459123 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Jan 14 00:54:42.459212 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Jan 14 00:54:42.459297 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Jan 14 00:54:42.459385 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Jan 14 00:54:42.459490 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Jan 14 00:54:42.459575 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Jan 14 00:54:42.459663 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Jan 14 00:54:42.459757 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Jan 14 00:54:42.459840 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 00:54:42.459926 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Jan 14 00:54:42.460008 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 00:54:42.460092 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Jan 14 00:54:42.460176 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 00:54:42.460260 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Jan 14 00:54:42.460351 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 00:54:42.460438 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Jan 14 00:54:42.460538 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 00:54:42.460630 kernel: pci 0000:00:01.5: BAR 0 
[mem 0x14205000-0x14205fff]: assigned Jan 14 00:54:42.460731 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 00:54:42.460820 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Jan 14 00:54:42.460902 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 00:54:42.460986 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Jan 14 00:54:42.461087 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 00:54:42.461172 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Jan 14 00:54:42.461255 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 00:54:42.461343 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Jan 14 00:54:42.461425 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Jan 14 00:54:42.461509 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Jan 14 00:54:42.461592 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Jan 14 00:54:42.461675 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Jan 14 00:54:42.461770 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Jan 14 00:54:42.461860 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Jan 14 00:54:42.461943 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Jan 14 00:54:42.462027 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Jan 14 00:54:42.462109 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Jan 14 00:54:42.462192 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Jan 14 00:54:42.462276 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Jan 14 00:54:42.462360 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Jan 14 00:54:42.462442 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 
00:54:42.462523 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.462606 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Jan 14 00:54:42.462694 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.462784 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.462868 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Jan 14 00:54:42.462951 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.463033 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.463116 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Jan 14 00:54:42.463198 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.463282 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.463367 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Jan 14 00:54:42.463449 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.463531 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.463615 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Jan 14 00:54:42.463710 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.463796 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.463887 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Jan 14 00:54:42.464022 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.464109 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.464193 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Jan 14 00:54:42.464275 kernel: pci 
0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.464357 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.464447 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Jan 14 00:54:42.464550 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.464639 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.464760 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Jan 14 00:54:42.464846 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.464928 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.465018 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Jan 14 00:54:42.465116 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.465205 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.465292 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Jan 14 00:54:42.465375 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.465485 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.465574 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Jan 14 00:54:42.465660 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.465756 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.465839 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Jan 14 00:54:42.465921 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.466014 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.466106 kernel: pci 0000:00:04.5: BAR 0 [mem 
0x1421d000-0x1421dfff]: assigned Jan 14 00:54:42.466194 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.466277 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.466362 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Jan 14 00:54:42.466446 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.466528 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.466612 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Jan 14 00:54:42.466705 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.466798 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.466884 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Jan 14 00:54:42.466967 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.467050 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.467133 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 00:54:42.467216 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Jan 14 00:54:42.467303 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Jan 14 00:54:42.467388 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Jan 14 00:54:42.467473 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 00:54:42.467561 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Jan 14 00:54:42.467647 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Jan 14 00:54:42.467757 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Jan 14 00:54:42.467843 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 00:54:42.467931 kernel: pci 0000:00:03.7: 
bridge window [io 0xa000-0xafff]: assigned Jan 14 00:54:42.468015 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Jan 14 00:54:42.468098 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Jan 14 00:54:42.468182 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Jan 14 00:54:42.468269 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Jan 14 00:54:42.468353 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Jan 14 00:54:42.468443 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.468557 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.468645 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.468744 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.468834 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.468917 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.469001 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.469091 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.469180 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.469267 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.469352 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.469437 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.469521 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.469604 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.469696 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: 
can't assign; no space Jan 14 00:54:42.469785 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.469870 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.469953 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.470039 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.470122 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.470208 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.470295 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.470381 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.470464 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.470550 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.470635 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.470738 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.470827 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.470916 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.471020 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.471106 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.471192 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.471279 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.471361 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.471445 kernel: pci 0000:00:01.0: bridge 
window [io size 0x1000]: can't assign; no space Jan 14 00:54:42.471527 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jan 14 00:54:42.471617 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 14 00:54:42.471717 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 14 00:54:42.471802 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 14 00:54:42.471885 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 14 00:54:42.471966 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 00:54:42.472058 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 00:54:42.472153 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 14 00:54:42.472238 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 14 00:54:42.472323 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Jan 14 00:54:42.472405 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 00:54:42.472494 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Jan 14 00:54:42.472798 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 14 00:54:42.472892 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 14 00:54:42.472976 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Jan 14 00:54:42.473065 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 00:54:42.473161 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 14 00:54:42.473245 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 14 00:54:42.473328 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Jan 14 00:54:42.473410 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 00:54:42.473501 kernel: pci 0000:05:00.0: BAR 4 [mem 
0x8000800000-0x8000803fff 64bit pref]: assigned Jan 14 00:54:42.473672 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 14 00:54:42.473793 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 14 00:54:42.473878 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 00:54:42.473960 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 00:54:42.474052 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 14 00:54:42.474136 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 14 00:54:42.474225 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 14 00:54:42.474307 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 00:54:42.474389 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 00:54:42.474475 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 14 00:54:42.474557 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 00:54:42.474640 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 00:54:42.474738 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 14 00:54:42.474825 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 00:54:42.474909 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 00:54:42.474997 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 14 00:54:42.475083 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 00:54:42.475167 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 00:54:42.475251 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 14 00:54:42.475334 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Jan 14 00:54:42.475416 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Jan 14 00:54:42.475501 kernel: pci 
0000:00:02.2: PCI bridge to [bus 0b] Jan 14 00:54:42.475586 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Jan 14 00:54:42.475727 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Jan 14 00:54:42.475817 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 14 00:54:42.475902 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Jan 14 00:54:42.475986 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Jan 14 00:54:42.476072 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 14 00:54:42.476159 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Jan 14 00:54:42.476241 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Jan 14 00:54:42.476325 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 14 00:54:42.476408 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Jan 14 00:54:42.476490 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 14 00:54:42.476596 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 14 00:54:42.476680 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Jan 14 00:54:42.476779 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 14 00:54:42.476864 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 14 00:54:42.476950 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Jan 14 00:54:42.477035 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 14 00:54:42.477123 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 14 00:54:42.477207 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Jan 14 00:54:42.477289 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Jan 14 00:54:42.477373 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 14 00:54:42.477456 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Jan 14 00:54:42.477538 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Jan 14 00:54:42.477623 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 14 00:54:42.477718 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Jan 14 00:54:42.477802 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Jan 14 00:54:42.477884 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Jan 14 00:54:42.477968 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 14 00:54:42.478050 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Jan 14 00:54:42.478135 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Jan 14 00:54:42.478217 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Jan 14 00:54:42.478302 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 14 00:54:42.478386 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Jan 14 00:54:42.478470 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Jan 14 00:54:42.478551 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Jan 14 00:54:42.478638 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 14 00:54:42.478741 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Jan 14 00:54:42.478827 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Jan 14 00:54:42.478912 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 14 00:54:42.478998 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 14 00:54:42.479081 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Jan 14 00:54:42.479162 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Jan 14 00:54:42.479244 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 14 00:54:42.479332 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 14 00:54:42.479415 kernel: pci 
0000:00:03.7: bridge window [io 0xa000-0xafff] Jan 14 00:54:42.479501 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Jan 14 00:54:42.479583 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 14 00:54:42.479667 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 14 00:54:42.479760 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Jan 14 00:54:42.479846 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Jan 14 00:54:42.479928 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Jan 14 00:54:42.480016 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 14 00:54:42.480098 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Jan 14 00:54:42.480181 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Jan 14 00:54:42.480265 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Jan 14 00:54:42.480351 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 14 00:54:42.480436 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Jan 14 00:54:42.480537 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Jan 14 00:54:42.480625 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Jan 14 00:54:42.480723 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 14 00:54:42.480810 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Jan 14 00:54:42.480892 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Jan 14 00:54:42.480974 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Jan 14 00:54:42.481062 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 14 00:54:42.481146 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Jan 14 00:54:42.481228 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Jan 14 00:54:42.481309 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] 
Jan 14 00:54:42.481395 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 14 00:54:42.481479 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Jan 14 00:54:42.481565 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Jan 14 00:54:42.481648 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 14 00:54:42.481761 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 14 00:54:42.481849 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Jan 14 00:54:42.481933 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Jan 14 00:54:42.482016 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 14 00:54:42.482101 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 14 00:54:42.482190 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Jan 14 00:54:42.482274 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Jan 14 00:54:42.482357 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 14 00:54:42.482442 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 14 00:54:42.482525 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Jan 14 00:54:42.482607 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Jan 14 00:54:42.482696 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Jan 14 00:54:42.482786 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 14 00:54:42.482864 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 14 00:54:42.482938 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 14 00:54:42.483029 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 14 00:54:42.483109 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 00:54:42.483198 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 14 00:54:42.483275 kernel: pci_bus 
0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 00:54:42.483361 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 14 00:54:42.483438 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 00:54:42.483525 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 14 00:54:42.483605 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 00:54:42.483697 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 14 00:54:42.483784 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 00:54:42.483870 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 14 00:54:42.483949 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 00:54:42.484032 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 14 00:54:42.484114 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 00:54:42.484201 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 14 00:54:42.484279 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 00:54:42.484363 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 14 00:54:42.484440 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 00:54:42.484543 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Jan 14 00:54:42.484625 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Jan 14 00:54:42.484730 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Jan 14 00:54:42.484811 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Jan 14 00:54:42.484897 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Jan 14 00:54:42.484977 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Jan 14 00:54:42.485063 kernel: pci_bus 
0000:0d: resource 1 [mem 0x11800000-0x119fffff] Jan 14 00:54:42.485141 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Jan 14 00:54:42.485226 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Jan 14 00:54:42.485333 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 14 00:54:42.485426 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Jan 14 00:54:42.485504 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 14 00:54:42.485589 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Jan 14 00:54:42.485667 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 14 00:54:42.485769 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Jan 14 00:54:42.485882 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Jan 14 00:54:42.485973 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Jan 14 00:54:42.486053 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Jan 14 00:54:42.486168 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Jan 14 00:54:42.486248 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Jan 14 00:54:42.486328 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Jan 14 00:54:42.486417 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Jan 14 00:54:42.486495 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Jan 14 00:54:42.486572 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Jan 14 00:54:42.486654 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Jan 14 00:54:42.486745 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Jan 14 00:54:42.486831 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Jan 14 00:54:42.486915 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Jan 14 00:54:42.486992 
kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Jan 14 00:54:42.487068 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 14 00:54:42.487152 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Jan 14 00:54:42.487231 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Jan 14 00:54:42.487307 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 14 00:54:42.487415 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Jan 14 00:54:42.487501 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Jan 14 00:54:42.487606 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 14 00:54:42.487720 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Jan 14 00:54:42.487807 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Jan 14 00:54:42.487886 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Jan 14 00:54:42.487973 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Jan 14 00:54:42.488050 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Jan 14 00:54:42.488128 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Jan 14 00:54:42.488226 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 14 00:54:42.488311 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Jan 14 00:54:42.488387 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Jan 14 00:54:42.488474 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Jan 14 00:54:42.488565 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Jan 14 00:54:42.488642 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Jan 14 00:54:42.488772 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Jan 14 00:54:42.488854 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Jan 14 00:54:42.488930 kernel: pci_bus 0000:1d: resource 2 [mem 
0x8003800000-0x80039fffff 64bit pref] Jan 14 00:54:42.489013 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Jan 14 00:54:42.489090 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Jan 14 00:54:42.489184 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 14 00:54:42.489276 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Jan 14 00:54:42.489355 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Jan 14 00:54:42.489432 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 14 00:54:42.489520 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Jan 14 00:54:42.489598 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Jan 14 00:54:42.489677 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 14 00:54:42.489770 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Jan 14 00:54:42.489849 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Jan 14 00:54:42.489925 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Jan 14 00:54:42.489935 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 14 00:54:42.489944 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 14 00:54:42.489953 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 14 00:54:42.489963 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 14 00:54:42.489972 kernel: iommu: Default domain type: Translated Jan 14 00:54:42.489981 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 00:54:42.489989 kernel: efivars: Registered efivars operations Jan 14 00:54:42.489997 kernel: vgaarb: loaded Jan 14 00:54:42.490005 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 00:54:42.490013 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 00:54:42.490023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 00:54:42.490032 kernel: pnp: PnP ACPI 
init Jan 14 00:54:42.490124 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 14 00:54:42.490137 kernel: pnp: PnP ACPI: found 1 devices Jan 14 00:54:42.490145 kernel: NET: Registered PF_INET protocol family Jan 14 00:54:42.490153 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 00:54:42.490164 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Jan 14 00:54:42.490172 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 00:54:42.490180 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 00:54:42.490189 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 00:54:42.490197 kernel: TCP: Hash tables configured (established 131072 bind 65536) Jan 14 00:54:42.490205 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 14 00:54:42.490213 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 14 00:54:42.490224 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 00:54:42.490315 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 14 00:54:42.490327 kernel: PCI: CLS 0 bytes, default 64 Jan 14 00:54:42.490336 kernel: kvm [1]: HYP mode not available Jan 14 00:54:42.490344 kernel: Initialise system trusted keyrings Jan 14 00:54:42.490352 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Jan 14 00:54:42.490384 kernel: Key type asymmetric registered Jan 14 00:54:42.490395 kernel: Asymmetric key parser 'x509' registered Jan 14 00:54:42.490403 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 00:54:42.490412 kernel: io scheduler mq-deadline registered Jan 14 00:54:42.490420 kernel: io scheduler kyber registered Jan 14 00:54:42.490428 kernel: io scheduler bfq registered Jan 14 00:54:42.490436 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 14 
00:54:42.490527 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Jan 14 00:54:42.490611 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Jan 14 00:54:42.490708 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.490797 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Jan 14 00:54:42.490881 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Jan 14 00:54:42.490964 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.491051 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Jan 14 00:54:42.491139 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Jan 14 00:54:42.491224 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.491310 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Jan 14 00:54:42.491394 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Jan 14 00:54:42.491477 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.491563 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Jan 14 00:54:42.491646 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Jan 14 00:54:42.491740 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.491827 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Jan 14 00:54:42.491911 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55 Jan 14 00:54:42.491995 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.492081 
kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Jan 14 00:54:42.492164 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Jan 14 00:54:42.492247 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.492336 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Jan 14 00:54:42.492422 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Jan 14 00:54:42.492518 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.492530 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 14 00:54:42.492638 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Jan 14 00:54:42.492749 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Jan 14 00:54:42.492857 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.492947 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Jan 14 00:54:42.493029 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Jan 14 00:54:42.493111 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.493195 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Jan 14 00:54:42.493278 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Jan 14 00:54:42.493362 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.493463 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Jan 14 00:54:42.493553 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Jan 14 00:54:42.493639 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ 
NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.493736 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Jan 14 00:54:42.493830 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Jan 14 00:54:42.493915 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.494011 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Jan 14 00:54:42.494099 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Jan 14 00:54:42.494181 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.494265 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Jan 14 00:54:42.494348 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Jan 14 00:54:42.494447 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.494534 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Jan 14 00:54:42.494616 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Jan 14 00:54:42.494708 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.494720 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 14 00:54:42.494803 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Jan 14 00:54:42.494887 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Jan 14 00:54:42.494972 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.495056 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Jan 14 00:54:42.495139 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Jan 14 00:54:42.495220 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.495306 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Jan 14 00:54:42.495388 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Jan 14 00:54:42.495471 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.495557 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Jan 14 00:54:42.495639 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Jan 14 00:54:42.495730 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.495817 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Jan 14 00:54:42.495899 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Jan 14 00:54:42.495982 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.496069 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Jan 14 00:54:42.496152 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Jan 14 00:54:42.496234 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.496318 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Jan 14 00:54:42.496400 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Jan 14 00:54:42.496483 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.496585 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Jan 14 00:54:42.496671 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Jan 14 00:54:42.496770 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.496783 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 14 00:54:42.496865 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Jan 14 00:54:42.496948 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Jan 14 00:54:42.497030 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.497117 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Jan 14 00:54:42.497199 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Jan 14 00:54:42.497280 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.497365 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Jan 14 00:54:42.497448 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Jan 14 00:54:42.497530 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.497617 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Jan 14 00:54:42.497711 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Jan 14 00:54:42.497796 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.497881 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Jan 14 00:54:42.497964 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Jan 14 00:54:42.498047 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.498135 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Jan 14 00:54:42.498219 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Jan 14 00:54:42.498302 kernel: pcieport 
0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.498387 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Jan 14 00:54:42.498470 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Jan 14 00:54:42.498553 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.498641 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Jan 14 00:54:42.498736 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Jan 14 00:54:42.498822 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.498907 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Jan 14 00:54:42.498990 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Jan 14 00:54:42.499073 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 00:54:42.499085 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 14 00:54:42.499095 kernel: ACPI: button: Power Button [PWRB] Jan 14 00:54:42.499182 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Jan 14 00:54:42.499273 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 14 00:54:42.499285 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 00:54:42.499293 kernel: thunder_xcv, ver 1.0 Jan 14 00:54:42.499301 kernel: thunder_bgx, ver 1.0 Jan 14 00:54:42.499309 kernel: nicpf, ver 1.0 Jan 14 00:54:42.499319 kernel: nicvf, ver 1.0 Jan 14 00:54:42.499422 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 00:54:42.499502 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T00:54:41 UTC (1768352081) Jan 14 00:54:42.499513 kernel: hid: raw HID events driver (C) Jiri 
Kosina Jan 14 00:54:42.499522 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 00:54:42.499530 kernel: watchdog: NMI not fully supported Jan 14 00:54:42.499540 kernel: watchdog: Hard watchdog permanently disabled Jan 14 00:54:42.499549 kernel: NET: Registered PF_INET6 protocol family Jan 14 00:54:42.499557 kernel: Segment Routing with IPv6 Jan 14 00:54:42.499565 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 00:54:42.499573 kernel: NET: Registered PF_PACKET protocol family Jan 14 00:54:42.499581 kernel: Key type dns_resolver registered Jan 14 00:54:42.499589 kernel: registered taskstats version 1 Jan 14 00:54:42.499599 kernel: Loading compiled-in X.509 certificates Jan 14 00:54:42.499607 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: d16d100cda59d8093883df975a5384fda36b7d35' Jan 14 00:54:42.499615 kernel: Demotion targets for Node 0: null Jan 14 00:54:42.499623 kernel: Key type .fscrypt registered Jan 14 00:54:42.499631 kernel: Key type fscrypt-provisioning registered Jan 14 00:54:42.499639 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 14 00:54:42.499647 kernel: ima: Allocated hash algorithm: sha1 Jan 14 00:54:42.499655 kernel: ima: No architecture policies found Jan 14 00:54:42.499665 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 14 00:54:42.499673 kernel: clk: Disabling unused clocks Jan 14 00:54:42.499681 kernel: PM: genpd: Disabling unused power domains Jan 14 00:54:42.499704 kernel: Freeing unused kernel memory: 12480K Jan 14 00:54:42.499713 kernel: Run /init as init process Jan 14 00:54:42.499721 kernel: with arguments: Jan 14 00:54:42.499729 kernel: /init Jan 14 00:54:42.499739 kernel: with environment: Jan 14 00:54:42.499747 kernel: HOME=/ Jan 14 00:54:42.499755 kernel: TERM=linux Jan 14 00:54:42.499763 kernel: ACPI: bus type USB registered Jan 14 00:54:42.499771 kernel: usbcore: registered new interface driver usbfs Jan 14 00:54:42.499779 kernel: usbcore: registered new interface driver hub Jan 14 00:54:42.499788 kernel: usbcore: registered new device driver usb Jan 14 00:54:42.499882 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 00:54:42.499969 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 14 00:54:42.500054 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 14 00:54:42.500137 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 14 00:54:42.500222 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 14 00:54:42.500305 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 14 00:54:42.500416 kernel: hub 1-0:1.0: USB hub found Jan 14 00:54:42.500546 kernel: hub 1-0:1.0: 4 ports detected Jan 14 00:54:42.500660 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 14 00:54:42.500795 kernel: hub 2-0:1.0: USB hub found Jan 14 00:54:42.500891 kernel: hub 2-0:1.0: 4 ports detected Jan 14 00:54:42.500989 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 14 00:54:42.501077 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 14 00:54:42.501089 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 14 00:54:42.501098 kernel: GPT:25804799 != 104857599 Jan 14 00:54:42.501107 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 14 00:54:42.501115 kernel: GPT:25804799 != 104857599 Jan 14 00:54:42.501125 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 14 00:54:42.501133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 14 00:54:42.501142 kernel: SCSI subsystem initialized Jan 14 00:54:42.501150 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 00:54:42.501159 kernel: device-mapper: uevent: version 1.0.3 Jan 14 00:54:42.501168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 14 00:54:42.501176 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 14 00:54:42.501186 kernel: raid6: neonx8 gen() 15792 MB/s Jan 14 00:54:42.501195 kernel: raid6: neonx4 gen() 15695 MB/s Jan 14 00:54:42.501203 kernel: raid6: neonx2 gen() 13123 MB/s Jan 14 00:54:42.501212 kernel: raid6: neonx1 gen() 10379 MB/s Jan 14 00:54:42.501220 kernel: raid6: int64x8 gen() 6785 MB/s Jan 14 00:54:42.501228 kernel: raid6: int64x4 gen() 7319 MB/s Jan 14 00:54:42.501237 kernel: raid6: int64x2 gen() 6086 MB/s Jan 14 00:54:42.501245 kernel: raid6: int64x1 gen() 5005 MB/s Jan 14 00:54:42.501255 kernel: raid6: using algorithm neonx8 gen() 15792 MB/s Jan 14 00:54:42.501264 kernel: raid6: .... 
xor() 11954 MB/s, rmw enabled Jan 14 00:54:42.501272 kernel: raid6: using neon recovery algorithm Jan 14 00:54:42.501281 kernel: xor: measuring software checksum speed Jan 14 00:54:42.501291 kernel: 8regs : 19874 MB/sec Jan 14 00:54:42.501300 kernel: 32regs : 21704 MB/sec Jan 14 00:54:42.501310 kernel: arm64_neon : 28128 MB/sec Jan 14 00:54:42.501320 kernel: xor: using function: arm64_neon (28128 MB/sec) Jan 14 00:54:42.501427 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 14 00:54:42.501440 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 00:54:42.501449 kernel: BTRFS: device fsid 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (274) Jan 14 00:54:42.501459 kernel: BTRFS info (device dm-0): first mount of filesystem 68b1ce8e-a637-4e91-acf8-5a2e05e289e5 Jan 14 00:54:42.501470 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:54:42.501479 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 00:54:42.501487 kernel: BTRFS info (device dm-0): enabling free space tree Jan 14 00:54:42.501495 kernel: loop: module loaded Jan 14 00:54:42.501504 kernel: loop0: detected capacity change from 0 to 91832 Jan 14 00:54:42.501512 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 00:54:42.501622 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 14 00:54:42.501638 systemd[1]: Successfully made /usr/ read-only. Jan 14 00:54:42.501650 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 14 00:54:42.501661 systemd[1]: Detected virtualization kvm. 
Jan 14 00:54:42.501670 systemd[1]: Detected architecture arm64. Jan 14 00:54:42.501679 systemd[1]: Running in initrd. Jan 14 00:54:42.501707 systemd[1]: No hostname configured, using default hostname. Jan 14 00:54:42.501719 systemd[1]: Hostname set to . Jan 14 00:54:42.501728 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 14 00:54:42.501738 systemd[1]: Queued start job for default target initrd.target. Jan 14 00:54:42.501747 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:54:42.501757 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:54:42.501767 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:54:42.501778 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 00:54:42.501818 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 00:54:42.501830 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 00:54:42.501840 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 00:54:42.501850 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:54:42.501860 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:54:42.501871 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:54:42.501881 systemd[1]: Reached target paths.target - Path Units. Jan 14 00:54:42.501891 systemd[1]: Reached target slices.target - Slice Units. Jan 14 00:54:42.501901 systemd[1]: Reached target swap.target - Swaps. Jan 14 00:54:42.501910 systemd[1]: Reached target timers.target - Timer Units. Jan 14 00:54:42.501920 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 14 00:54:42.501931 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:54:42.501941 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:54:42.501951 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 00:54:42.501960 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 14 00:54:42.501970 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 00:54:42.501980 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 00:54:42.501990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 00:54:42.502000 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 00:54:42.502010 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 00:54:42.502020 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 00:54:42.502029 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 00:54:42.502045 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 00:54:42.502055 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 14 00:54:42.502065 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 00:54:42.502076 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 00:54:42.502088 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 00:54:42.502098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:54:42.502109 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 14 00:54:42.502119 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 00:54:42.502128 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 00:54:42.502138 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 00:54:42.502180 systemd-journald[417]: Collecting audit messages is enabled. Jan 14 00:54:42.502209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 00:54:42.502218 kernel: Bridge firewalling registered Jan 14 00:54:42.502228 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 00:54:42.502240 kernel: audit: type=1130 audit(1768352082.433:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.502252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 00:54:42.502265 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:54:42.502275 kernel: audit: type=1130 audit(1768352082.447:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.502284 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 00:54:42.502294 kernel: audit: type=1130 audit(1768352082.452:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.502305 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 14 00:54:42.502315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 00:54:42.502326 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 00:54:42.502340 kernel: audit: type=1130 audit(1768352082.465:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.502352 kernel: audit: type=1334 audit(1768352082.467:6): prog-id=6 op=LOAD Jan 14 00:54:42.502361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 00:54:42.502371 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 00:54:42.502380 kernel: audit: type=1130 audit(1768352082.477:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.502389 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 00:54:42.502401 kernel: audit: type=1130 audit(1768352082.489:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.502410 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 00:54:42.502422 systemd-journald[417]: Journal started Jan 14 00:54:42.502441 systemd-journald[417]: Runtime Journal (/run/log/journal/69fd7a990d924050b415c1214747ec36) is 8M, max 319.5M, 311.5M free. Jan 14 00:54:42.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:42.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.467000 audit: BPF prog-id=6 op=LOAD Jan 14 00:54:42.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.430315 systemd-modules-load[418]: Inserted module 'br_netfilter' Jan 14 00:54:42.505386 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 00:54:42.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.508723 kernel: audit: type=1130 audit(1768352082.504:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.509259 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 14 00:54:42.517595 dracut-cmdline[445]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=3d3f73de8d2693594dfefd279d2c8d77c282a05a4cbc54177503d31784261f6b Jan 14 00:54:42.530239 systemd-resolved[433]: Positive Trust Anchors: Jan 14 00:54:42.530259 systemd-resolved[433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 00:54:42.530263 systemd-resolved[433]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 00:54:42.530294 systemd-resolved[433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 00:54:42.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.530538 systemd-tmpfiles[459]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 14 00:54:42.545746 kernel: audit: type=1130 audit(1768352082.540:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:42.534783 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:54:42.556921 systemd-resolved[433]: Defaulting to hostname 'linux'. Jan 14 00:54:42.557781 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 00:54:42.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.559283 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:54:42.610744 kernel: Loading iSCSI transport class v2.0-870. Jan 14 00:54:42.623716 kernel: iscsi: registered transport (tcp) Jan 14 00:54:42.640730 kernel: iscsi: registered transport (qla4xxx) Jan 14 00:54:42.640755 kernel: QLogic iSCSI HBA Driver Jan 14 00:54:42.663282 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 00:54:42.683347 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 00:54:42.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.685636 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 00:54:42.730432 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 00:54:42.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.734823 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 00:54:42.736378 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 14 00:54:42.768616 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 00:54:42.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.769000 audit: BPF prog-id=7 op=LOAD Jan 14 00:54:42.769000 audit: BPF prog-id=8 op=LOAD Jan 14 00:54:42.771172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 00:54:42.815186 systemd-udevd[695]: Using default interface naming scheme 'v257'. Jan 14 00:54:42.822973 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 00:54:42.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.826739 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 00:54:42.830727 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 00:54:42.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.839000 audit: BPF prog-id=9 op=LOAD Jan 14 00:54:42.841157 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 00:54:42.851473 dracut-pre-trigger[790]: rd.md=0: removing MD RAID activation Jan 14 00:54:42.875280 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:54:42.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:42.876327 systemd-networkd[795]: lo: Link UP Jan 14 00:54:42.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.876330 systemd-networkd[795]: lo: Gained carrier Jan 14 00:54:42.876914 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 00:54:42.878418 systemd[1]: Reached target network.target - Network. Jan 14 00:54:42.880357 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 00:54:42.984656 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:54:42.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:42.987946 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 14 00:54:43.067593 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 14 00:54:43.076510 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 14 00:54:43.096709 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 14 00:54:43.096762 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 14 00:54:43.102724 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 14 00:54:43.108751 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 14 00:54:43.116044 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 14 00:54:43.119442 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 14 00:54:43.130732 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 00:54:43.130864 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 00:54:43.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:43.132834 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:54:43.133196 systemd-networkd[795]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:54:43.133200 systemd-networkd[795]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 00:54:43.134175 systemd-networkd[795]: eth0: Link UP Jan 14 00:54:43.134337 systemd-networkd[795]: eth0: Gained carrier Jan 14 00:54:43.134347 systemd-networkd[795]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 00:54:43.139519 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 00:54:43.147256 disk-uuid[874]: Primary Header is updated. Jan 14 00:54:43.147256 disk-uuid[874]: Secondary Entries is updated. Jan 14 00:54:43.147256 disk-uuid[874]: Secondary Header is updated. Jan 14 00:54:43.157830 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 14 00:54:43.164025 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 14 00:54:43.164273 kernel: usbcore: registered new interface driver usbhid Jan 14 00:54:43.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:43.165245 kernel: usbhid: USB HID core driver Jan 14 00:54:43.204392 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 00:54:43.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:43.207779 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:54:43.210112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:54:43.214233 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 00:54:43.217270 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 00:54:43.240948 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:54:43.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:43.243756 systemd-networkd[795]: eth0: DHCPv4 address 10.0.30.209/25, gateway 10.0.30.129 acquired from 10.0.30.129 Jan 14 00:54:44.195063 disk-uuid[876]: Warning: The kernel is still using the old partition table. 
Jan 14 00:54:44.195063 disk-uuid[876]: The new table will be used at the next reboot or after you Jan 14 00:54:44.195063 disk-uuid[876]: run partprobe(8) or kpartx(8) Jan 14 00:54:44.195063 disk-uuid[876]: The operation has completed successfully. Jan 14 00:54:44.200383 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 00:54:44.200543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 00:54:44.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:44.203000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:44.204778 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 00:54:44.258735 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (905) Jan 14 00:54:44.261159 kernel: BTRFS info (device vda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:54:44.261226 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:54:44.267047 kernel: BTRFS info (device vda6): turning on async discard Jan 14 00:54:44.267110 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 00:54:44.272717 kernel: BTRFS info (device vda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:54:44.273557 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 00:54:44.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:44.276881 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 14 00:54:44.416766 ignition[924]: Ignition 2.24.0 Jan 14 00:54:44.416785 ignition[924]: Stage: fetch-offline Jan 14 00:54:44.416824 ignition[924]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:44.420125 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:54:44.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:44.416835 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:44.417014 ignition[924]: parsed url from cmdline: "" Jan 14 00:54:44.422216 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 14 00:54:44.417018 ignition[924]: no config URL provided Jan 14 00:54:44.417735 ignition[924]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:54:44.417747 ignition[924]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:54:44.417752 ignition[924]: failed to fetch config: resource requires networking Jan 14 00:54:44.417941 ignition[924]: Ignition finished successfully Jan 14 00:54:44.447350 ignition[935]: Ignition 2.24.0 Jan 14 00:54:44.447371 ignition[935]: Stage: fetch Jan 14 00:54:44.447516 ignition[935]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:44.447523 ignition[935]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:44.447604 ignition[935]: parsed url from cmdline: "" Jan 14 00:54:44.447607 ignition[935]: no config URL provided Jan 14 00:54:44.447615 ignition[935]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 00:54:44.447620 ignition[935]: no config at "/usr/lib/ignition/user.ign" Jan 14 00:54:44.447818 ignition[935]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 14 00:54:44.447840 ignition[935]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Jan 14 00:54:44.447849 ignition[935]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 14 00:54:45.218002 systemd-networkd[795]: eth0: Gained IPv6LL Jan 14 00:54:45.448068 ignition[935]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 14 00:54:45.448177 ignition[935]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 14 00:54:46.411218 ignition[935]: GET result: OK Jan 14 00:54:46.411805 ignition[935]: parsing config with SHA512: 00e3b61db9d0386aa90240213991c1b3d4cf33616da1f8782f85c1bfd918da557b2bb6ab6623a4a091901f105268c1a60836503406659eefaf176e84c1e1575c Jan 14 00:54:46.416087 unknown[935]: fetched base config from "system" Jan 14 00:54:46.416098 unknown[935]: fetched base config from "system" Jan 14 00:54:46.416446 ignition[935]: fetch: fetch complete Jan 14 00:54:46.416105 unknown[935]: fetched user config from "openstack" Jan 14 00:54:46.416450 ignition[935]: fetch: fetch passed Jan 14 00:54:46.423162 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 14 00:54:46.423188 kernel: audit: type=1130 audit(1768352086.419:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.418500 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 00:54:46.416512 ignition[935]: Ignition finished successfully Jan 14 00:54:46.420811 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 14 00:54:46.449327 ignition[943]: Ignition 2.24.0 Jan 14 00:54:46.449349 ignition[943]: Stage: kargs Jan 14 00:54:46.449499 ignition[943]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:46.449507 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:46.450374 ignition[943]: kargs: kargs passed Jan 14 00:54:46.450422 ignition[943]: Ignition finished successfully Jan 14 00:54:46.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.454367 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 00:54:46.459551 kernel: audit: type=1130 audit(1768352086.454:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.456606 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 00:54:46.489280 ignition[951]: Ignition 2.24.0 Jan 14 00:54:46.489302 ignition[951]: Stage: disks Jan 14 00:54:46.489447 ignition[951]: no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:46.489455 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:46.490227 ignition[951]: disks: disks passed Jan 14 00:54:46.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.492954 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 00:54:46.499043 kernel: audit: type=1130 audit(1768352086.494:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:46.490274 ignition[951]: Ignition finished successfully Jan 14 00:54:46.494560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 00:54:46.498278 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 00:54:46.499989 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 00:54:46.501407 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 00:54:46.503082 systemd[1]: Reached target basic.target - Basic System. Jan 14 00:54:46.505578 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 00:54:46.556980 systemd-fsck[960]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 00:54:46.562048 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 00:54:46.568684 kernel: audit: type=1130 audit(1768352086.563:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.566141 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 00:54:46.660746 kernel: EXT4-fs (vda9): mounted filesystem db887ae3-d64c-46de-9f1e-de51a801ae44 r/w with ordered data mode. Quota mode: none. Jan 14 00:54:46.661816 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 00:54:46.662991 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 00:54:46.666089 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:54:46.668791 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 14 00:54:46.669739 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 00:54:46.682256 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 14 00:54:46.683379 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 00:54:46.683418 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:54:46.688284 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 00:54:46.692161 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 00:54:46.697701 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968) Jan 14 00:54:46.700422 kernel: BTRFS info (device vda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:54:46.700450 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:54:46.705152 kernel: BTRFS info (device vda6): turning on async discard Jan 14 00:54:46.705197 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 00:54:46.706366 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:54:46.752728 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:46.874410 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 00:54:46.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.881205 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jan 14 00:54:46.885240 kernel: audit: type=1130 audit(1768352086.879:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.885397 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 00:54:46.903081 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 00:54:46.904854 kernel: BTRFS info (device vda6): last unmount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:54:46.924186 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 00:54:46.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.929718 kernel: audit: type=1130 audit(1768352086.925:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.933954 ignition[1074]: INFO : Ignition 2.24.0 Jan 14 00:54:46.933954 ignition[1074]: INFO : Stage: mount Jan 14 00:54:46.936438 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:46.936438 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:46.936438 ignition[1074]: INFO : mount: mount passed Jan 14 00:54:46.936438 ignition[1074]: INFO : Ignition finished successfully Jan 14 00:54:46.943087 kernel: audit: type=1130 audit(1768352086.939:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:46.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:46.937502 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 00:54:47.808737 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:49.814720 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:53.826716 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:53.833468 coreos-metadata[970]: Jan 14 00:54:53.833 WARN failed to locate config-drive, using the metadata service API instead Jan 14 00:54:53.852551 coreos-metadata[970]: Jan 14 00:54:53.852 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 00:54:54.049469 coreos-metadata[970]: Jan 14 00:54:54.049 INFO Fetch successful Jan 14 00:54:54.049469 coreos-metadata[970]: Jan 14 00:54:54.049 INFO wrote hostname ci-4547-0-0-n-a666ba3d92 to /sysroot/etc/hostname Jan 14 00:54:54.052209 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 14 00:54:54.058896 kernel: audit: type=1130 audit(1768352094.052:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:54.058920 kernel: audit: type=1131 audit(1768352094.052:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:54.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:54.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:54.052319 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 14 00:54:54.054548 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 00:54:54.077260 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 00:54:54.108506 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1092) Jan 14 00:54:54.108553 kernel: BTRFS info (device vda6): first mount of filesystem 2541ae65-ff33-437e-8e42-0fd50870835d Jan 14 00:54:54.108565 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 00:54:54.113993 kernel: BTRFS info (device vda6): turning on async discard Jan 14 00:54:54.114070 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 00:54:54.115522 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 00:54:54.144378 ignition[1110]: INFO : Ignition 2.24.0 Jan 14 00:54:54.144378 ignition[1110]: INFO : Stage: files Jan 14 00:54:54.146120 ignition[1110]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:54.146120 ignition[1110]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:54.146120 ignition[1110]: DEBUG : files: compiled without relabeling support, skipping Jan 14 00:54:54.149685 ignition[1110]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 00:54:54.149685 ignition[1110]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 00:54:54.152396 ignition[1110]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 00:54:54.152396 ignition[1110]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 00:54:54.152396 ignition[1110]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 00:54:54.151629 unknown[1110]: wrote ssh authorized keys file for user: core Jan 14 00:54:54.158060 
ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 00:54:54.158060 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 14 00:54:54.231375 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 00:54:54.346743 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 14 00:54:54.346743 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:54:54.350489 
ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:54:54.350489 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 14 00:54:54.628120 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 00:54:55.239792 ignition[1110]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 14 00:54:55.239792 ignition[1110]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 00:54:55.244300 ignition[1110]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:54:55.247111 ignition[1110]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 00:54:55.247111 ignition[1110]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 00:54:55.247111 ignition[1110]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 00:54:55.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 
ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.255384 ignition[1110]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 00:54:55.255384 ignition[1110]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:54:55.255384 ignition[1110]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 00:54:55.255384 ignition[1110]: INFO : files: files passed Jan 14 00:54:55.255384 ignition[1110]: INFO : Ignition finished successfully Jan 14 00:54:55.263096 kernel: audit: type=1130 audit(1768352095.251:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.250569 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 00:54:55.255767 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 00:54:55.259854 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 00:54:55.273865 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 00:54:55.273959 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 00:54:55.281409 kernel: audit: type=1130 audit(1768352095.274:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.281435 kernel: audit: type=1131 audit(1768352095.274:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:55.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.284318 initrd-setup-root-after-ignition[1143]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:54:55.284318 initrd-setup-root-after-ignition[1143]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:54:55.287445 initrd-setup-root-after-ignition[1147]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 00:54:55.287844 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:54:55.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.291329 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 00:54:55.296239 kernel: audit: type=1130 audit(1768352095.290:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.296216 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 00:54:55.340533 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 00:54:55.340660 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 14 00:54:55.348346 kernel: audit: type=1130 audit(1768352095.342:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.348379 kernel: audit: type=1131 audit(1768352095.342:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.343033 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 00:54:55.349268 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 00:54:55.351077 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 00:54:55.351941 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 00:54:55.382960 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:54:55.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.385414 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 00:54:55.389415 kernel: audit: type=1130 audit(1768352095.383:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 00:54:55.406886 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 00:54:55.407120 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 00:54:55.409447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 00:54:55.411312 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 00:54:55.412913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 00:54:55.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.413038 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 00:54:55.418482 kernel: audit: type=1131 audit(1768352095.413:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.417541 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 00:54:55.419424 systemd[1]: Stopped target basic.target - Basic System. Jan 14 00:54:55.420911 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 00:54:55.422464 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 00:54:55.424221 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 00:54:55.425976 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 00:54:55.427712 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 00:54:55.429447 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 00:54:55.431197 systemd[1]: Stopped target sysinit.target - System Initialization. 
Jan 14 00:54:55.432917 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 00:54:55.434465 systemd[1]: Stopped target swap.target - Swaps. Jan 14 00:54:55.435844 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 00:54:55.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.435980 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 00:54:55.438031 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 00:54:55.439098 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 00:54:55.440796 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 00:54:55.445754 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 00:54:55.447342 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 00:54:55.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.447464 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 00:54:55.450068 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 00:54:55.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.450202 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 00:54:55.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 00:54:55.451942 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 00:54:55.452050 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 00:54:55.454509 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 00:54:55.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.456106 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 00:54:55.460000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.456907 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 00:54:55.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.457037 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 00:54:55.458882 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 00:54:55.458991 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 00:54:55.460584 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 00:54:55.460718 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 00:54:55.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 00:54:55.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.465812 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 00:54:55.465891 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 00:54:55.477534 ignition[1167]: INFO : Ignition 2.24.0 Jan 14 00:54:55.477534 ignition[1167]: INFO : Stage: umount Jan 14 00:54:55.479639 ignition[1167]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 00:54:55.479639 ignition[1167]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 00:54:55.479639 ignition[1167]: INFO : umount: umount passed Jan 14 00:54:55.479639 ignition[1167]: INFO : Ignition finished successfully Jan 14 00:54:55.482000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.486000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.477957 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 00:54:55.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.480136 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 00:54:55.480234 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 14 00:54:55.491000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.482817 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 00:54:55.482860 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 00:54:55.485022 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 00:54:55.485070 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 00:54:55.486433 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 00:54:55.486480 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 00:54:55.487968 systemd[1]: Stopped target network.target - Network. Jan 14 00:54:55.489740 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 00:54:55.489799 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 00:54:55.491632 systemd[1]: Stopped target paths.target - Path Units. Jan 14 00:54:55.493579 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 00:54:55.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.496959 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 00:54:55.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.498495 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 00:54:55.499981 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 00:54:55.501918 systemd[1]: iscsid.socket: Deactivated successfully. 
Jan 14 00:54:55.501961 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 00:54:55.503459 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 00:54:55.503490 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 00:54:55.505140 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 00:54:55.505163 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 00:54:55.507451 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 00:54:55.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.507509 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 00:54:55.508953 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 00:54:55.508995 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 00:54:55.527000 audit: BPF prog-id=6 op=UNLOAD Jan 14 00:54:55.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 00:54:55.511051 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 00:54:55.513102 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 00:54:55.521526 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 00:54:55.532000 audit: BPF prog-id=9 op=UNLOAD Jan 14 00:54:55.521643 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 00:54:55.525552 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 00:54:55.525637 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jan 14 00:54:55.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.530055 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 00:54:55.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.532281 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 00:54:55.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.532333 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:54:55.534828 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 00:54:55.535611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 00:54:55.535676 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 00:54:55.537634 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 00:54:55.537684 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:54:55.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.540039 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 00:54:55.540084 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:54:55.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.541756 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:54:55.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.549280 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 14 00:54:55.549385 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 14 00:54:55.551830 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 14 00:54:55.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.551907 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 14 00:54:55.553232 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 00:54:55.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.553385 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:54:55.555073 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 00:54:55.565000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.555107 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:54:55.557366 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 00:54:55.557395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:54:55.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.558880 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 00:54:55.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.558927 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 00:54:55.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.561251 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 00:54:55.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.561294 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 00:54:55.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.563632 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 00:54:55.563674 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 00:54:55.567296 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 00:54:55.568553 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 00:54:55.568618 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:54:55.570494 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 00:54:55.570536 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:54:55.572307 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 14 00:54:55.572355 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:54:55.574035 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 14 00:54:55.574078 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:54:55.575851 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 00:54:55.575897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:55.598813 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 00:54:55.598949 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 00:54:55.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.601280 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 00:54:55.602719 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 00:54:55.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:55.604135 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 00:54:55.606079 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 00:54:55.638874 systemd[1]: Switching root.
Jan 14 00:54:55.677549 systemd-journald[417]: Journal stopped
Jan 14 00:54:56.544554 systemd-journald[417]: Received SIGTERM from PID 1 (systemd).
Jan 14 00:54:56.544661 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 00:54:56.544682 kernel: SELinux: policy capability open_perms=1
Jan 14 00:54:56.544702 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 00:54:56.544717 kernel: SELinux: policy capability always_check_network=0
Jan 14 00:54:56.544727 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 00:54:56.544740 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 00:54:56.544753 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 00:54:56.544763 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 00:54:56.544777 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 00:54:56.544790 systemd[1]: Successfully loaded SELinux policy in 65.169ms.
Jan 14 00:54:56.544812 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.789ms.
Jan 14 00:54:56.544832 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 00:54:56.544847 systemd[1]: Detected virtualization kvm.
Jan 14 00:54:56.544863 systemd[1]: Detected architecture arm64.
Jan 14 00:54:56.544876 systemd[1]: Detected first boot.
Jan 14 00:54:56.544887 systemd[1]: Hostname set to .
Jan 14 00:54:56.544898 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 00:54:56.544909 zram_generator::config[1212]: No configuration found.
Jan 14 00:54:56.544927 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 00:54:56.544938 systemd[1]: Populated /etc with preset unit settings.
Jan 14 00:54:56.544948 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 00:54:56.544961 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 00:54:56.544972 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 00:54:56.544983 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 00:54:56.544993 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 00:54:56.545004 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 00:54:56.545014 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 00:54:56.545024 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 00:54:56.545038 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 00:54:56.545049 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 00:54:56.545062 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 00:54:56.545073 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 00:54:56.545084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 00:54:56.545095 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 00:54:56.545105 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 00:54:56.545118 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 00:54:56.545128 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 00:54:56.545139 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 14 00:54:56.545150 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 00:54:56.545161 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 00:54:56.545172 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 00:54:56.545183 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 00:54:56.545194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 00:54:56.545205 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 00:54:56.545216 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 00:54:56.545227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 00:54:56.545237 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 00:54:56.545249 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 00:54:56.545260 systemd[1]: Reached target swap.target - Swaps.
Jan 14 00:54:56.545270 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 00:54:56.545281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 00:54:56.545292 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 00:54:56.545302 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 00:54:56.545313 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 00:54:56.545325 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 00:54:56.545336 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 00:54:56.545346 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 00:54:56.545357 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 00:54:56.545368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 00:54:56.545379 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 00:54:56.545389 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 00:54:56.545401 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 00:54:56.545412 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 00:54:56.545423 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 00:54:56.545434 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 00:54:56.545444 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 00:54:56.545460 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 00:54:56.545474 systemd[1]: Reached target machines.target - Containers.
Jan 14 00:54:56.545486 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 00:54:56.545499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 00:54:56.545516 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 00:54:56.545527 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 00:54:56.545542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 00:54:56.545556 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 00:54:56.545567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 00:54:56.545578 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 00:54:56.545589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 00:54:56.545600 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 00:54:56.545614 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 00:54:56.545624 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 00:54:56.545636 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 00:54:56.545646 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 00:54:56.545657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 00:54:56.545668 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 00:54:56.545679 kernel: fuse: init (API version 7.41)
Jan 14 00:54:56.545699 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 00:54:56.545711 kernel: ACPI: bus type drm_connector registered
Jan 14 00:54:56.545727 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 00:54:56.545740 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 00:54:56.545751 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 00:54:56.545761 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 00:54:56.545772 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 00:54:56.545807 systemd-journald[1282]: Collecting audit messages is enabled.
Jan 14 00:54:56.545832 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 00:54:56.545844 systemd-journald[1282]: Journal started
Jan 14 00:54:56.545866 systemd-journald[1282]: Runtime Journal (/run/log/journal/69fd7a990d924050b415c1214747ec36) is 8M, max 319.5M, 311.5M free.
Jan 14 00:54:56.394000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 00:54:56.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.494000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 00:54:56.495000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 00:54:56.495000 audit: BPF prog-id=15 op=LOAD
Jan 14 00:54:56.495000 audit: BPF prog-id=16 op=LOAD
Jan 14 00:54:56.495000 audit: BPF prog-id=17 op=LOAD
Jan 14 00:54:56.541000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 00:54:56.541000 audit[1282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffcfbd4f40 a2=4000 a3=0 items=0 ppid=1 pid=1282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:54:56.541000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 00:54:56.299289 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 00:54:56.325140 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 00:54:56.325611 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 00:54:56.547466 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 00:54:56.549487 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 00:54:56.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.550450 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 00:54:56.551717 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 00:54:56.552854 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 00:54:56.555743 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 00:54:56.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.557061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 00:54:56.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.558475 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 00:54:56.558639 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 00:54:56.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.560051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 00:54:56.560203 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 00:54:56.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.561510 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 00:54:56.561673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 00:54:56.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.563011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 00:54:56.563181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 00:54:56.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.564521 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 00:54:56.564676 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 00:54:56.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.566039 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 00:54:56.566196 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 00:54:56.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.567514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 00:54:56.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.569978 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 00:54:56.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.572191 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 00:54:56.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.573980 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 00:54:56.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.586630 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 00:54:56.588947 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 00:54:56.591156 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 00:54:56.593150 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 00:54:56.594190 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 00:54:56.594220 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 00:54:56.596074 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 00:54:56.597343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 00:54:56.597451 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 00:54:56.608887 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 00:54:56.612743 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 00:54:56.613766 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 00:54:56.614832 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 00:54:56.615831 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 00:54:56.616732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 00:54:56.619840 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 00:54:56.622877 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 00:54:56.627988 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 00:54:56.629506 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 00:54:56.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.630858 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 00:54:56.633381 systemd-journald[1282]: Time spent on flushing to /var/log/journal/69fd7a990d924050b415c1214747ec36 is 25.587ms for 1820 entries.
Jan 14 00:54:56.633381 systemd-journald[1282]: System Journal (/var/log/journal/69fd7a990d924050b415c1214747ec36) is 8M, max 588.1M, 580.1M free.
Jan 14 00:54:56.668912 kernel: loop1: detected capacity change from 0 to 211168
Jan 14 00:54:56.668942 systemd-journald[1282]: Received client request to flush runtime journal.
Jan 14 00:54:56.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.641461 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 00:54:56.643911 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 00:54:56.649277 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 00:54:56.656063 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Jan 14 00:54:56.656075 systemd-tmpfiles[1334]: ACLs are not supported, ignoring.
Jan 14 00:54:56.665980 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 00:54:56.667890 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 00:54:56.672728 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 00:54:56.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.678581 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 00:54:56.680703 kernel: loop2: detected capacity change from 0 to 45344
Jan 14 00:54:56.691824 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 00:54:56.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.720742 kernel: loop3: detected capacity change from 0 to 100192
Jan 14 00:54:56.721358 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 00:54:56.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.722000 audit: BPF prog-id=18 op=LOAD
Jan 14 00:54:56.723000 audit: BPF prog-id=19 op=LOAD
Jan 14 00:54:56.723000 audit: BPF prog-id=20 op=LOAD
Jan 14 00:54:56.726842 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 00:54:56.727000 audit: BPF prog-id=21 op=LOAD
Jan 14 00:54:56.729854 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 00:54:56.731672 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 00:54:56.735000 audit: BPF prog-id=22 op=LOAD
Jan 14 00:54:56.735000 audit: BPF prog-id=23 op=LOAD
Jan 14 00:54:56.735000 audit: BPF prog-id=24 op=LOAD
Jan 14 00:54:56.736674 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 00:54:56.737000 audit: BPF prog-id=25 op=LOAD
Jan 14 00:54:56.737000 audit: BPF prog-id=26 op=LOAD
Jan 14 00:54:56.738000 audit: BPF prog-id=27 op=LOAD
Jan 14 00:54:56.740905 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 00:54:56.758554 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 14 00:54:56.758577 systemd-tmpfiles[1357]: ACLs are not supported, ignoring.
Jan 14 00:54:56.762103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 00:54:56.764714 kernel: loop4: detected capacity change from 0 to 1648
Jan 14 00:54:56.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.775712 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 00:54:56.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.778422 systemd-nsresourced[1358]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 00:54:56.779843 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 00:54:56.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.789794 kernel: loop5: detected capacity change from 0 to 211168
Jan 14 00:54:56.811720 kernel: loop6: detected capacity change from 0 to 45344
Jan 14 00:54:56.822702 kernel: loop7: detected capacity change from 0 to 100192
Jan 14 00:54:56.833808 systemd-oomd[1355]: No swap; memory pressure usage will be degraded
Jan 14 00:54:56.834238 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer.
Jan 14 00:54:56.835711 kernel: loop1: detected capacity change from 0 to 1648
Jan 14 00:54:56.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:56.838651 systemd-resolved[1356]: Positive Trust Anchors:
Jan 14 00:54:56.838671 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 00:54:56.838675 systemd-resolved[1356]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 00:54:56.838771 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 00:54:56.839779 (sd-merge)[1376]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'.
Jan 14 00:54:56.842636 (sd-merge)[1376]: Merged extensions into '/usr'.
Jan 14 00:54:56.846886 systemd[1]: Reload requested from client PID 1333 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 14 00:54:56.846910 systemd[1]: Reloading...
Jan 14 00:54:56.847474 systemd-resolved[1356]: Using system hostname 'ci-4547-0-0-n-a666ba3d92'.
Jan 14 00:54:56.895724 zram_generator::config[1407]: No configuration found.
Jan 14 00:54:57.050878 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 00:54:57.051295 systemd[1]: Reloading finished in 204 ms.
Jan 14 00:54:57.081195 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 00:54:57.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.082674 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 00:54:57.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.084030 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 00:54:57.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.087953 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 00:54:57.101332 systemd[1]: Starting ensure-sysext.service...
Jan 14 00:54:57.103112 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 00:54:57.104000 audit: BPF prog-id=8 op=UNLOAD
Jan 14 00:54:57.104000 audit: BPF prog-id=7 op=UNLOAD
Jan 14 00:54:57.104000 audit: BPF prog-id=28 op=LOAD
Jan 14 00:54:57.104000 audit: BPF prog-id=29 op=LOAD
Jan 14 00:54:57.105452 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 00:54:57.107000 audit: BPF prog-id=30 op=LOAD
Jan 14 00:54:57.107000 audit: BPF prog-id=25 op=UNLOAD
Jan 14 00:54:57.107000 audit: BPF prog-id=31 op=LOAD
Jan 14 00:54:57.107000 audit: BPF prog-id=32 op=LOAD
Jan 14 00:54:57.107000 audit: BPF prog-id=26 op=UNLOAD
Jan 14 00:54:57.107000 audit: BPF prog-id=27 op=UNLOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=33 op=LOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=18 op=UNLOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=34 op=LOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=35 op=LOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=19 op=UNLOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=20 op=UNLOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=36 op=LOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=15 op=UNLOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=37 op=LOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=38 op=LOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=16 op=UNLOAD
Jan 14 00:54:57.108000 audit: BPF prog-id=17 op=UNLOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=39 op=LOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=21 op=UNLOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=40 op=LOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=22 op=UNLOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=41 op=LOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=42 op=LOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=23 op=UNLOAD
Jan 14 00:54:57.109000 audit: BPF prog-id=24 op=UNLOAD
Jan 14 00:54:57.114370 systemd[1]: Reload requested from client PID 1446 ('systemctl') (unit ensure-sysext.service)...
Jan 14 00:54:57.114395 systemd[1]: Reloading...
Jan 14 00:54:57.118787 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 14 00:54:57.119090 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 14 00:54:57.119393 systemd-tmpfiles[1447]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 00:54:57.120405 systemd-tmpfiles[1447]: ACLs are not supported, ignoring.
Jan 14 00:54:57.120575 systemd-tmpfiles[1447]: ACLs are not supported, ignoring.
Jan 14 00:54:57.127356 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 00:54:57.127375 systemd-tmpfiles[1447]: Skipping /boot
Jan 14 00:54:57.129564 systemd-udevd[1448]: Using default interface naming scheme 'v257'.
Jan 14 00:54:57.135189 systemd-tmpfiles[1447]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 00:54:57.135211 systemd-tmpfiles[1447]: Skipping /boot
Jan 14 00:54:57.165735 zram_generator::config[1480]: No configuration found.
Jan 14 00:54:57.284764 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 00:54:57.370599 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 14 00:54:57.370719 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 00:54:57.372221 systemd[1]: Reloading finished in 257 ms.
Jan 14 00:54:57.374742 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0
Jan 14 00:54:57.374812 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 14 00:54:57.374841 kernel: [drm] features: -context_init
Jan 14 00:54:57.379954 kernel: [drm] number of scanouts: 1
Jan 14 00:54:57.380011 kernel: [drm] number of cap sets: 0
Jan 14 00:54:57.385681 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 00:54:57.386716 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0
Jan 14 00:54:57.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.389000 audit: BPF prog-id=43 op=LOAD
Jan 14 00:54:57.389000 audit: BPF prog-id=40 op=UNLOAD
Jan 14 00:54:57.389000 audit: BPF prog-id=44 op=LOAD
Jan 14 00:54:57.389000 audit: BPF prog-id=45 op=LOAD
Jan 14 00:54:57.389000 audit: BPF prog-id=41 op=UNLOAD
Jan 14 00:54:57.389000 audit: BPF prog-id=42 op=UNLOAD
Jan 14 00:54:57.390000 audit: BPF prog-id=46 op=LOAD
Jan 14 00:54:57.390000 audit: BPF prog-id=47 op=LOAD
Jan 14 00:54:57.390000 audit: BPF prog-id=28 op=UNLOAD
Jan 14 00:54:57.390000 audit: BPF prog-id=29 op=UNLOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=48 op=LOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=36 op=UNLOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=49 op=LOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=50 op=LOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=37 op=UNLOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=38 op=UNLOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=51 op=LOAD
Jan 14 00:54:57.391000 audit: BPF prog-id=39 op=UNLOAD
Jan 14 00:54:57.392000 audit: BPF prog-id=52 op=LOAD
Jan 14 00:54:57.392000 audit: BPF prog-id=33 op=UNLOAD
Jan 14 00:54:57.392000 audit: BPF prog-id=53 op=LOAD
Jan 14 00:54:57.392000 audit: BPF prog-id=54 op=LOAD
Jan 14 00:54:57.392000 audit: BPF prog-id=34 op=UNLOAD
Jan 14 00:54:57.392000 audit: BPF prog-id=35 op=UNLOAD
Jan 14 00:54:57.393000 audit: BPF prog-id=55 op=LOAD
Jan 14 00:54:57.393000 audit: BPF prog-id=30 op=UNLOAD
Jan 14 00:54:57.393000 audit: BPF prog-id=56 op=LOAD
Jan 14 00:54:57.393000 audit: BPF prog-id=57 op=LOAD
Jan 14 00:54:57.393000 audit: BPF prog-id=31 op=UNLOAD
Jan 14 00:54:57.393000 audit: BPF prog-id=32 op=UNLOAD
Jan 14 00:54:57.397180 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 00:54:57.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.407715 kernel: Console: switching to colour frame buffer device 160x50
Jan 14 00:54:57.423728 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 14 00:54:57.434803 systemd[1]: Finished ensure-sysext.service.
Jan 14 00:54:57.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.443060 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 14 00:54:57.445497 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 00:54:57.446821 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 00:54:57.463365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 00:54:57.465520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 00:54:57.467424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 00:54:57.471846 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 00:54:57.473908 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm...
Jan 14 00:54:57.475253 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 00:54:57.475363 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 00:54:57.476947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 14 00:54:57.478929 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 00:54:57.480364 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 00:54:57.481650 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 00:54:57.484000 audit: BPF prog-id=58 op=LOAD
Jan 14 00:54:57.486030 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 00:54:57.488870 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 00:54:57.490851 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 00:54:57.490908 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 00:54:57.491926 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 00:54:57.494598 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 00:54:57.496783 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 00:54:57.500138 kernel: PTP clock support registered
Jan 14 00:54:57.500899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 00:54:57.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.502851 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 00:54:57.503042 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 00:54:57.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.505008 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 00:54:57.505341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 00:54:57.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.512610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 00:54:57.516057 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 00:54:57.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.517735 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully.
Jan 14 00:54:57.519345 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm.
Jan 14 00:54:57.519000 audit[1587]: SYSTEM_BOOT pid=1587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.521965 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 00:54:57.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.525285 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 00:54:57.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.534463 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 00:54:57.534630 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 00:54:57.538893 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 00:54:57.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 00:54:57.544000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jan 14 00:54:57.544000 audit[1611]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc1ab9ad0 a2=420 a3=0 items=0 ppid=1568 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 00:54:57.544000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jan 14 00:54:57.545302 augenrules[1611]: No rules
Jan 14 00:54:57.548330 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 14 00:54:57.548675 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 14 00:54:57.600550 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 00:54:57.601534 systemd-networkd[1584]: lo: Link UP
Jan 14 00:54:57.601549 systemd-networkd[1584]: lo: Gained carrier
Jan 14 00:54:57.602078 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 00:54:57.602992 systemd-networkd[1584]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:54:57.603003 systemd-networkd[1584]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 00:54:57.603024 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 00:54:57.604014 systemd-networkd[1584]: eth0: Link UP
Jan 14 00:54:57.604201 systemd[1]: Reached target network.target - Network.
Jan 14 00:54:57.604546 systemd-networkd[1584]: eth0: Gained carrier
Jan 14 00:54:57.604568 systemd-networkd[1584]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 00:54:57.606601 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 14 00:54:57.609848 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 00:54:57.612989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 00:54:57.625790 systemd-networkd[1584]: eth0: DHCPv4 address 10.0.30.209/25, gateway 10.0.30.129 acquired from 10.0.30.129
Jan 14 00:54:57.634388 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jan 14 00:54:58.116108 ldconfig[1581]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 00:54:58.120525 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 00:54:58.122980 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 00:54:58.146409 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 00:54:58.147809 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 00:54:58.148899 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 00:54:58.150018 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 00:54:58.151302 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 00:54:58.152405 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 00:54:58.153614 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update.
Jan 14 00:54:58.154909 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update.
Jan 14 00:54:58.155910 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 00:54:58.157212 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 00:54:58.157243 systemd[1]: Reached target paths.target - Path Units.
Jan 14 00:54:58.158076 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 00:54:58.161230 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 00:54:58.163572 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 00:54:58.166466 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jan 14 00:54:58.167863 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jan 14 00:54:58.168990 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jan 14 00:54:58.181107 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 00:54:58.182346 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jan 14 00:54:58.184019 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 00:54:58.185095 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 00:54:58.185976 systemd[1]: Reached target basic.target - Basic System.
Jan 14 00:54:58.186857 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 00:54:58.186882 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 00:54:58.189644 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 00:54:58.191394 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 00:54:58.193500 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 14 00:54:58.196983 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 00:54:58.198669 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 14 00:54:58.201729 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev
Jan 14 00:54:58.202941 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 00:54:58.206916 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 00:54:58.207870 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 00:54:58.215044 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 00:54:58.218913 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 00:54:58.222032 jq[1638]: false
Jan 14 00:54:58.220824 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 00:54:58.222856 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 00:54:58.227104 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 00:54:58.228069 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 00:54:58.229915 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 00:54:58.230483 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 00:54:58.234417 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 00:54:58.237415 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 14 00:54:58.238893 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 00:54:58.239111 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 00:54:58.239350 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 00:54:58.239533 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 14 00:54:58.240510 extend-filesystems[1639]: Found /dev/vda6
Jan 14 00:54:58.244529 extend-filesystems[1639]: Found /dev/vda9
Jan 14 00:54:58.246724 extend-filesystems[1639]: Checking size of /dev/vda9
Jan 14 00:54:58.246457 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 00:54:58.250503 jq[1653]: true
Jan 14 00:54:58.246842 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 00:54:58.263779 chronyd[1631]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 00:54:58.264492 jq[1670]: true
Jan 14 00:54:58.266572 systemd[1]: Started chronyd.service - NTP client/server.
Jan 14 00:54:58.266392 chronyd[1631]: Loaded seccomp filter (level 2)
Jan 14 00:54:58.270443 update_engine[1650]: I20260114 00:54:58.270186 1650 main.cc:92] Flatcar Update Engine starting
Jan 14 00:54:58.276766 extend-filesystems[1639]: Resized partition /dev/vda9
Jan 14 00:54:58.280820 extend-filesystems[1685]: resize2fs 1.47.3 (8-Jul-2025)
Jan 14 00:54:58.288781 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks
Jan 14 00:54:58.297713 tar[1669]: linux-arm64/LICENSE
Jan 14 00:54:58.297713 tar[1669]: linux-arm64/helm
Jan 14 00:54:58.311477 dbus-daemon[1634]: [system] SELinux support is enabled
Jan 14 00:54:58.312047 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 00:54:58.318447 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 00:54:58.318482 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 00:54:58.320395 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 00:54:58.320418 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 00:54:58.321305 update_engine[1650]: I20260114 00:54:58.321244 1650 update_check_scheduler.cc:74] Next update check in 3m58s
Jan 14 00:54:58.323176 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 00:54:58.325861 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 00:54:58.379133 systemd-logind[1648]: New seat seat0.
Jan 14 00:54:58.397426 systemd-logind[1648]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 14 00:54:58.397451 systemd-logind[1648]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard)
Jan 14 00:54:58.397918 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 00:54:58.399892 locksmithd[1700]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 00:54:58.428854 bash[1701]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 00:54:58.430593 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 00:54:58.432355 containerd[1673]: time="2026-01-14T00:54:58Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jan 14 00:54:58.433379 containerd[1673]: time="2026-01-14T00:54:58.432780440Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5
Jan 14 00:54:58.443961 systemd[1]: Starting sshkeys.service...
Jan 14 00:54:58.447884 containerd[1673]: time="2026-01-14T00:54:58.447837760Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.88µs"
Jan 14 00:54:58.447978 containerd[1673]: time="2026-01-14T00:54:58.447962680Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 14 00:54:58.448057 containerd[1673]: time="2026-01-14T00:54:58.448044560Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 14 00:54:58.448104 containerd[1673]: time="2026-01-14T00:54:58.448092720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 14 00:54:58.448320 containerd[1673]: time="2026-01-14T00:54:58.448300960Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 14 00:54:58.448382 containerd[1673]: time="2026-01-14T00:54:58.448369400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 00:54:58.449184 containerd[1673]: time="2026-01-14T00:54:58.449155040Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 00:54:58.449268 containerd[1673]: time="2026-01-14T00:54:58.449254920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.449871 containerd[1673]: time="2026-01-14T00:54:58.449840200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.450085 containerd[1673]: time="2026-01-14T00:54:58.450060840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 00:54:58.450154 containerd[1673]: time="2026-01-14T00:54:58.450140080Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 14 00:54:58.450254 containerd[1673]: time="2026-01-14T00:54:58.450186360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.450666 containerd[1673]: time="2026-01-14T00:54:58.450636520Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.450868 containerd[1673]: time="2026-01-14T00:54:58.450752120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 14 00:54:58.451039 containerd[1673]: time="2026-01-14T00:54:58.451016440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.451528 containerd[1673]: time="2026-01-14T00:54:58.451499480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.453073 containerd[1673]: time="2026-01-14T00:54:58.451751280Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 14 00:54:58.453157 containerd[1673]: time="2026-01-14T00:54:58.453140840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 14 00:54:58.453261 containerd[1673]: time="2026-01-14T00:54:58.453245560Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 14 00:54:58.453922 containerd[1673]: time="2026-01-14T00:54:58.453891320Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 14 00:54:58.454184 containerd[1673]: time="2026-01-14T00:54:58.454157960Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 00:54:58.462844 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 14 00:54:58.466865 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 14 00:54:58.479716 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:58.482676 containerd[1673]: time="2026-01-14T00:54:58.482606000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 00:54:58.482766 containerd[1673]: time="2026-01-14T00:54:58.482716680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:54:58.483926 containerd[1673]: time="2026-01-14T00:54:58.483881800Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 00:54:58.483926 containerd[1673]: time="2026-01-14T00:54:58.483917720Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 00:54:58.483999 containerd[1673]: time="2026-01-14T00:54:58.483936840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 00:54:58.483999 containerd[1673]: time="2026-01-14T00:54:58.483952440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 00:54:58.483999 containerd[1673]: time="2026-01-14T00:54:58.483971040Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 00:54:58.483999 containerd[1673]: time="2026-01-14T00:54:58.483982320Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 00:54:58.483999 containerd[1673]: time="2026-01-14T00:54:58.483994200Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 00:54:58.483999 containerd[1673]: time="2026-01-14T00:54:58.484008880Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 
00:54:58.485100 containerd[1673]: time="2026-01-14T00:54:58.484918320Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 00:54:58.485100 containerd[1673]: time="2026-01-14T00:54:58.484965320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 00:54:58.485186 containerd[1673]: time="2026-01-14T00:54:58.485141640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 00:54:58.485186 containerd[1673]: time="2026-01-14T00:54:58.485172800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485318880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485356600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485378640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485390800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485404720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485417960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 00:54:58.485446 containerd[1673]: time="2026-01-14T00:54:58.485434600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 00:54:58.485446 containerd[1673]: 
time="2026-01-14T00:54:58.485447360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485463120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485478400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485491200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485522280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485570560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485586040Z" level=info msg="Start snapshots syncer" Jan 14 00:54:58.486062 containerd[1673]: time="2026-01-14T00:54:58.485619080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 00:54:58.486194 containerd[1673]: time="2026-01-14T00:54:58.485953800Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 00:54:58.486194 containerd[1673]: time="2026-01-14T00:54:58.486006520Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 00:54:58.486297 containerd[1673]: 
time="2026-01-14T00:54:58.486074560Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486172200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486200880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486213640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486227680Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486243480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486258720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486270920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486286240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 00:54:58.486297 containerd[1673]: time="2026-01-14T00:54:58.486300400Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 00:54:58.486456 containerd[1673]: time="2026-01-14T00:54:58.486351360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:54:58.486456 containerd[1673]: 
time="2026-01-14T00:54:58.486370160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 00:54:58.486456 containerd[1673]: time="2026-01-14T00:54:58.486379480Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:54:58.486456 containerd[1673]: time="2026-01-14T00:54:58.486392160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 00:54:58.486456 containerd[1673]: time="2026-01-14T00:54:58.486403920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 00:54:58.486456 containerd[1673]: time="2026-01-14T00:54:58.486415800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 00:54:58.486456 containerd[1673]: time="2026-01-14T00:54:58.486430920Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 00:54:58.487303 containerd[1673]: time="2026-01-14T00:54:58.486570680Z" level=info msg="runtime interface created" Jan 14 00:54:58.487303 containerd[1673]: time="2026-01-14T00:54:58.486581080Z" level=info msg="created NRI interface" Jan 14 00:54:58.487303 containerd[1673]: time="2026-01-14T00:54:58.486595680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 00:54:58.487303 containerd[1673]: time="2026-01-14T00:54:58.486607440Z" level=info msg="Connect containerd service" Jan 14 00:54:58.487303 containerd[1673]: time="2026-01-14T00:54:58.486635400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 00:54:58.489519 containerd[1673]: time="2026-01-14T00:54:58.489380240Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 00:54:58.580858 containerd[1673]: time="2026-01-14T00:54:58.580733360Z" level=info msg="Start subscribing containerd event" Jan 14 00:54:58.580858 containerd[1673]: time="2026-01-14T00:54:58.580817280Z" level=info msg="Start recovering state" Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580914080Z" level=info msg="Start event monitor" Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580926840Z" level=info msg="Start cni network conf syncer for default" Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580934000Z" level=info msg="Start streaming server" Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580943840Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580951120Z" level=info msg="runtime interface starting up..." Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580956240Z" level=info msg="starting plugins..." Jan 14 00:54:58.581020 containerd[1673]: time="2026-01-14T00:54:58.580968480Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 00:54:58.581424 containerd[1673]: time="2026-01-14T00:54:58.581383560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 00:54:58.581467 containerd[1673]: time="2026-01-14T00:54:58.581433560Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 00:54:58.581495 containerd[1673]: time="2026-01-14T00:54:58.581478520Z" level=info msg="containerd successfully booted in 0.149756s" Jan 14 00:54:58.581642 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 14 00:54:58.594725 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 14 00:54:58.609564 extend-filesystems[1685]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 00:54:58.609564 extend-filesystems[1685]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 14 00:54:58.609564 extend-filesystems[1685]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 14 00:54:58.613683 extend-filesystems[1639]: Resized filesystem in /dev/vda9 Jan 14 00:54:58.612319 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 00:54:58.612969 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 00:54:58.715306 tar[1669]: linux-arm64/README.md Jan 14 00:54:58.732219 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 00:54:59.041898 systemd-networkd[1584]: eth0: Gained IPv6LL Jan 14 00:54:59.043981 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 00:54:59.046913 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 00:54:59.050005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:54:59.052477 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 00:54:59.072352 sshd_keygen[1661]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 00:54:59.092102 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 00:54:59.095996 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 00:54:59.099816 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 00:54:59.112414 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 00:54:59.113729 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 00:54:59.117342 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 14 00:54:59.139794 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 14 00:54:59.144172 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 00:54:59.146670 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 00:54:59.148088 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 00:54:59.213725 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:59.491729 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:54:59.890048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:54:59.894075 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:55:00.422893 kubelet[1775]: E0114 00:55:00.422828 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:55:00.425612 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:55:00.425769 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:55:00.427776 systemd[1]: kubelet.service: Consumed 769ms CPU time, 257.8M memory peak. 
Jan 14 00:55:01.222733 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:55:01.499726 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:55:05.236754 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:55:05.241937 coreos-metadata[1633]: Jan 14 00:55:05.241 WARN failed to locate config-drive, using the metadata service API instead Jan 14 00:55:05.258510 coreos-metadata[1633]: Jan 14 00:55:05.258 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 14 00:55:05.513725 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 00:55:05.518680 coreos-metadata[1717]: Jan 14 00:55:05.518 WARN failed to locate config-drive, using the metadata service API instead Jan 14 00:55:05.532501 coreos-metadata[1717]: Jan 14 00:55:05.532 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 14 00:55:05.623891 coreos-metadata[1633]: Jan 14 00:55:05.623 INFO Fetch successful Jan 14 00:55:05.623891 coreos-metadata[1633]: Jan 14 00:55:05.623 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 00:55:05.783010 coreos-metadata[1717]: Jan 14 00:55:05.782 INFO Fetch successful Jan 14 00:55:05.783010 coreos-metadata[1717]: Jan 14 00:55:05.782 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 00:55:05.872721 coreos-metadata[1633]: Jan 14 00:55:05.872 INFO Fetch successful Jan 14 00:55:05.872721 coreos-metadata[1633]: Jan 14 00:55:05.872 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 14 00:55:06.031863 coreos-metadata[1717]: Jan 14 00:55:06.031 INFO Fetch successful Jan 14 00:55:06.033783 unknown[1717]: wrote ssh authorized keys file for user: core Jan 14 00:55:06.041771 coreos-metadata[1633]: Jan 14 00:55:06.041 INFO Fetch successful Jan 14 00:55:06.041771 coreos-metadata[1633]: Jan 14 00:55:06.041 INFO Fetching 
http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 14 00:55:06.064034 update-ssh-keys[1795]: Updated "/home/core/.ssh/authorized_keys" Jan 14 00:55:06.064620 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 00:55:06.067184 systemd[1]: Finished sshkeys.service. Jan 14 00:55:06.177128 coreos-metadata[1633]: Jan 14 00:55:06.177 INFO Fetch successful Jan 14 00:55:06.177128 coreos-metadata[1633]: Jan 14 00:55:06.177 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 14 00:55:06.312874 coreos-metadata[1633]: Jan 14 00:55:06.312 INFO Fetch successful Jan 14 00:55:06.312874 coreos-metadata[1633]: Jan 14 00:55:06.312 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 14 00:55:06.444076 coreos-metadata[1633]: Jan 14 00:55:06.444 INFO Fetch successful Jan 14 00:55:06.473448 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 00:55:06.473901 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 00:55:06.474038 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 00:55:06.474150 systemd[1]: Startup finished in 2.662s (kernel) + 13.633s (initrd) + 10.753s (userspace) = 27.048s. Jan 14 00:55:10.676628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 00:55:10.678171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:55:10.813863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:55:10.825283 (kubelet)[1811]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:55:10.863107 kubelet[1811]: E0114 00:55:10.863049 1811 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:55:10.866384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:55:10.866515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:55:10.866974 systemd[1]: kubelet.service: Consumed 144ms CPU time, 106.1M memory peak. Jan 14 00:55:20.987183 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 00:55:20.988653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:55:21.110059 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:55:21.113858 (kubelet)[1828]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:55:21.149719 kubelet[1828]: E0114 00:55:21.149647 1828 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:55:21.152366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:55:21.152521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:55:21.153875 systemd[1]: kubelet.service: Consumed 137ms CPU time, 109M memory peak. 
Jan 14 00:55:22.053988 chronyd[1631]: Selected source PHC0 Jan 14 00:55:31.237125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 00:55:31.239022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:55:31.392373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:55:31.395812 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:55:31.428235 kubelet[1845]: E0114 00:55:31.428178 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:55:31.430870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:55:31.430996 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:55:31.431392 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.5M memory peak. Jan 14 00:55:41.486685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 00:55:41.488107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:55:41.648588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:55:41.652796 (kubelet)[1861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:55:41.688766 kubelet[1861]: E0114 00:55:41.688709 1861 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:55:41.691306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:55:41.691445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:55:41.691885 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.4M memory peak. Jan 14 00:55:43.457942 update_engine[1650]: I20260114 00:55:43.457800 1650 update_attempter.cc:509] Updating boot flags... Jan 14 00:55:51.737195 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 00:55:51.738714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:55:51.904683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:55:51.908233 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:55:51.939115 kubelet[1893]: E0114 00:55:51.939052 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:55:51.941637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:55:51.941792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:55:51.943776 systemd[1]: kubelet.service: Consumed 137ms CPU time, 105.3M memory peak. Jan 14 00:56:01.987153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 14 00:56:01.988498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:02.131500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:02.144153 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:02.175527 kubelet[1909]: E0114 00:56:02.175466 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:02.178003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:02.178125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:02.180180 systemd[1]: kubelet.service: Consumed 136ms CPU time, 107.3M memory peak. 
Jan 14 00:56:12.237178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 14 00:56:12.239037 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:12.407911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:12.411564 (kubelet)[1926]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:12.445006 kubelet[1926]: E0114 00:56:12.444930 1926 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:12.447605 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:12.447752 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:12.449924 systemd[1]: kubelet.service: Consumed 141ms CPU time, 106.1M memory peak. Jan 14 00:56:22.487057 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 14 00:56:22.488784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:22.630908 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:56:22.634838 (kubelet)[1942]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:22.663795 kubelet[1942]: E0114 00:56:22.663750 1942 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:22.666336 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:22.666461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:22.666833 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.3M memory peak. Jan 14 00:56:32.737236 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jan 14 00:56:32.738715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:32.907843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:32.912289 (kubelet)[1959]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:32.942386 kubelet[1959]: E0114 00:56:32.942335 1959 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:32.946743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:32.946873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:32.947410 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106.8M memory peak. 
Jan 14 00:56:42.987114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jan 14 00:56:42.988784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:43.144855 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:56:43.148584 (kubelet)[1975]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:43.183360 kubelet[1975]: E0114 00:56:43.183292 1975 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:43.186003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:43.186132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:43.187889 systemd[1]: kubelet.service: Consumed 140ms CPU time, 106.8M memory peak. Jan 14 00:56:53.237232 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11. Jan 14 00:56:53.238574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:56:53.400012 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:56:53.403832 (kubelet)[1992]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:56:53.434274 kubelet[1992]: E0114 00:56:53.434223 1992 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:56:53.437395 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:56:53.437528 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:56:53.437917 systemd[1]: kubelet.service: Consumed 139ms CPU time, 106.9M memory peak. Jan 14 00:57:03.487109 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12. Jan 14 00:57:03.489406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:03.629383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:03.653037 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:03.683295 kubelet[2008]: E0114 00:57:03.683230 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:03.686065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:03.686199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:03.686532 systemd[1]: kubelet.service: Consumed 141ms CPU time, 105.7M memory peak. 
Jan 14 00:57:13.737391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13. Jan 14 00:57:13.739547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:14.122552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:14.126614 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:14.157268 kubelet[2024]: E0114 00:57:14.157222 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:14.159874 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:14.160001 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:14.161783 systemd[1]: kubelet.service: Consumed 151ms CPU time, 107.3M memory peak. Jan 14 00:57:24.237207 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14. Jan 14 00:57:24.238814 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:24.505300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:57:24.508961 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:24.538293 kubelet[2040]: E0114 00:57:24.538226 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:24.540894 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:24.541022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:24.541366 systemd[1]: kubelet.service: Consumed 136ms CPU time, 106.8M memory peak. Jan 14 00:57:34.737234 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 15. Jan 14 00:57:34.738868 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:34.954205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:34.958213 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:34.987840 kubelet[2057]: E0114 00:57:34.987749 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:34.990364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:34.990497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:34.991826 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.3M memory peak. 
Jan 14 00:57:45.237156 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 16. Jan 14 00:57:45.238711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:45.505772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:57:45.510074 (kubelet)[2073]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:45.540609 kubelet[2073]: E0114 00:57:45.540545 2073 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:45.543339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:45.543472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:45.544823 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.1M memory peak. Jan 14 00:57:55.737159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 17. Jan 14 00:57:55.738763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:57:55.945868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:57:55.949576 (kubelet)[2089]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:57:55.980228 kubelet[2089]: E0114 00:57:55.980177 2089 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:57:55.982966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:57:55.983100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:57:55.984409 systemd[1]: kubelet.service: Consumed 138ms CPU time, 105.3M memory peak. Jan 14 00:58:05.987165 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 18. Jan 14 00:58:05.988569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:06.248643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:06.252634 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:06.282959 kubelet[2105]: E0114 00:58:06.282899 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:06.285562 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:06.285716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:06.286818 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.2M memory peak. 
Jan 14 00:58:16.487238 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 19. Jan 14 00:58:16.489130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:16.818039 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:16.821524 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:16.851438 kubelet[2122]: E0114 00:58:16.851371 2122 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:16.853992 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:16.854128 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:16.854475 systemd[1]: kubelet.service: Consumed 138ms CPU time, 105.7M memory peak. Jan 14 00:58:26.987122 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20. Jan 14 00:58:26.988480 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:27.163201 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:58:27.166611 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:27.195398 kubelet[2139]: E0114 00:58:27.195342 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:27.197994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:27.198126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:27.199772 systemd[1]: kubelet.service: Consumed 136ms CPU time, 105.6M memory peak. Jan 14 00:58:37.237304 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 21. Jan 14 00:58:37.238886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:37.609288 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:37.613057 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:37.644922 kubelet[2155]: E0114 00:58:37.644869 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:37.647559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:37.647713 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:37.648270 systemd[1]: kubelet.service: Consumed 143ms CPU time, 106.9M memory peak. 
Jan 14 00:58:47.737092 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 22. Jan 14 00:58:47.738459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:48.046050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:48.060584 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:48.090308 kubelet[2171]: E0114 00:58:48.090247 2171 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:48.092705 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:48.092834 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:48.094776 systemd[1]: kubelet.service: Consumed 137ms CPU time, 106.3M memory peak. Jan 14 00:58:56.525971 update_engine[1650]: I20260114 00:58:56.525851 1650 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 00:58:56.525971 update_engine[1650]: I20260114 00:58:56.525952 1650 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 00:58:56.526621 update_engine[1650]: I20260114 00:58:56.526336 1650 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 00:58:56.526787 update_engine[1650]: I20260114 00:58:56.526755 1650 omaha_request_params.cc:62] Current group set to alpha Jan 14 00:58:56.526866 update_engine[1650]: I20260114 00:58:56.526849 1650 update_attempter.cc:499] Already updated boot flags. Skipping. 
Jan 14 00:58:56.526866 update_engine[1650]: I20260114 00:58:56.526860 1650 update_attempter.cc:643] Scheduling an action processor start. Jan 14 00:58:56.526916 update_engine[1650]: I20260114 00:58:56.526875 1650 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 00:58:56.527087 update_engine[1650]: I20260114 00:58:56.527070 1650 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 00:58:56.527578 update_engine[1650]: I20260114 00:58:56.527117 1650 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 00:58:56.527578 update_engine[1650]: I20260114 00:58:56.527128 1650 omaha_request_action.cc:272] Request: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: Jan 14 00:58:56.527578 update_engine[1650]: I20260114 00:58:56.527134 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:58:56.527826 locksmithd[1700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 00:58:56.528919 update_engine[1650]: I20260114 00:58:56.528884 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:58:56.529674 update_engine[1650]: I20260114 00:58:56.529636 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:58:56.538691 update_engine[1650]: E20260114 00:58:56.538643 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:58:56.538748 update_engine[1650]: I20260114 00:58:56.538734 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 00:58:58.237218 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 23. Jan 14 00:58:58.238715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:58:58.417403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:58:58.421472 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:58:58.450262 kubelet[2188]: E0114 00:58:58.450212 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:58:58.452950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:58:58.453079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:58:58.454787 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.3M memory peak. Jan 14 00:59:06.435821 update_engine[1650]: I20260114 00:59:06.435653 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:59:06.436446 update_engine[1650]: I20260114 00:59:06.435840 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:59:06.436446 update_engine[1650]: I20260114 00:59:06.436380 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:59:06.442424 update_engine[1650]: E20260114 00:59:06.442385 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:59:06.442471 update_engine[1650]: I20260114 00:59:06.442454 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 00:59:08.486855 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 24. Jan 14 00:59:08.488284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:08.760851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:59:08.764935 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:59:08.794648 kubelet[2204]: E0114 00:59:08.794592 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:59:08.797227 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:59:08.797472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:59:08.798828 systemd[1]: kubelet.service: Consumed 140ms CPU time, 105.8M memory peak. Jan 14 00:59:16.435934 update_engine[1650]: I20260114 00:59:16.435760 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:59:16.435934 update_engine[1650]: I20260114 00:59:16.435905 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:59:16.436774 update_engine[1650]: I20260114 00:59:16.436422 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:59:16.442419 update_engine[1650]: E20260114 00:59:16.442378 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:59:16.442468 update_engine[1650]: I20260114 00:59:16.442448 1650 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 00:59:18.987081 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 25. Jan 14 00:59:18.988415 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:19.372238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:59:19.375645 (kubelet)[2220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:59:19.404683 kubelet[2220]: E0114 00:59:19.404640 2220 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:59:19.407168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:59:19.407303 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:59:19.408784 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.5M memory peak. Jan 14 00:59:26.435908 update_engine[1650]: I20260114 00:59:26.435757 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:59:26.435908 update_engine[1650]: I20260114 00:59:26.435904 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:59:26.436592 update_engine[1650]: I20260114 00:59:26.436525 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:59:26.441805 update_engine[1650]: E20260114 00:59:26.441758 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:59:26.441872 update_engine[1650]: I20260114 00:59:26.441823 1650 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 00:59:26.441872 update_engine[1650]: I20260114 00:59:26.441831 1650 omaha_request_action.cc:617] Omaha request response: Jan 14 00:59:26.441917 update_engine[1650]: E20260114 00:59:26.441898 1650 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 00:59:26.441917 update_engine[1650]: I20260114 00:59:26.441912 1650 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 00:59:26.441955 update_engine[1650]: I20260114 00:59:26.441917 1650 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:59:26.441955 update_engine[1650]: I20260114 00:59:26.441926 1650 update_attempter.cc:306] Processing Done. Jan 14 00:59:26.441992 update_engine[1650]: E20260114 00:59:26.441953 1650 update_attempter.cc:619] Update failed. Jan 14 00:59:26.441992 update_engine[1650]: I20260114 00:59:26.441970 1650 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 00:59:26.442031 update_engine[1650]: I20260114 00:59:26.441984 1650 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 00:59:26.442031 update_engine[1650]: I20260114 00:59:26.441998 1650 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 14 00:59:26.442195 update_engine[1650]: I20260114 00:59:26.442142 1650 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 00:59:26.442225 update_engine[1650]: I20260114 00:59:26.442202 1650 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 00:59:26.442225 update_engine[1650]: I20260114 00:59:26.442209 1650 omaha_request_action.cc:272] Request: Jan 14 00:59:26.442225 update_engine[1650]: Jan 14 00:59:26.442225 update_engine[1650]: Jan 14 00:59:26.442225 update_engine[1650]: Jan 14 00:59:26.442225 update_engine[1650]: Jan 14 00:59:26.442225 update_engine[1650]: Jan 14 00:59:26.442225 update_engine[1650]: Jan 14 00:59:26.442225 update_engine[1650]: I20260114 00:59:26.442214 1650 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 00:59:26.442375 update_engine[1650]: I20260114 00:59:26.442229 1650 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 00:59:26.442469 update_engine[1650]: I20260114 00:59:26.442435 1650 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 00:59:26.442756 locksmithd[1700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 00:59:26.449927 update_engine[1650]: E20260114 00:59:26.449869 1650 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 00:59:26.449927 update_engine[1650]: I20260114 00:59:26.449925 1650 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 00:59:26.449927 update_engine[1650]: I20260114 00:59:26.449934 1650 omaha_request_action.cc:617] Omaha request response: Jan 14 00:59:26.449927 update_engine[1650]: I20260114 00:59:26.449939 1650 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:59:26.450230 update_engine[1650]: I20260114 00:59:26.449944 1650 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 00:59:26.450230 update_engine[1650]: I20260114 00:59:26.449949 1650 update_attempter.cc:306] Processing Done. Jan 14 00:59:26.450230 update_engine[1650]: I20260114 00:59:26.449952 1650 update_attempter.cc:310] Error event sent. Jan 14 00:59:26.450230 update_engine[1650]: I20260114 00:59:26.449962 1650 update_check_scheduler.cc:74] Next update check in 48m15s Jan 14 00:59:26.450484 locksmithd[1700]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 00:59:29.487202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 26. Jan 14 00:59:29.488881 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:29.764915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 00:59:29.768132 (kubelet)[2236]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:59:29.798369 kubelet[2236]: E0114 00:59:29.798294 2236 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:59:29.800878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:59:29.801015 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:59:29.801359 systemd[1]: kubelet.service: Consumed 140ms CPU time, 105.3M memory peak. Jan 14 00:59:39.987069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 27. Jan 14 00:59:39.988470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:40.351924 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:59:40.355932 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:59:40.384416 kubelet[2253]: E0114 00:59:40.384371 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:59:40.387020 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:59:40.387145 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:59:40.387845 systemd[1]: kubelet.service: Consumed 140ms CPU time, 105.5M memory peak. 
Jan 14 00:59:50.487083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 28. Jan 14 00:59:50.489388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 00:59:50.723216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 00:59:50.726774 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 00:59:50.756569 kubelet[2270]: E0114 00:59:50.756478 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 00:59:50.759081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 00:59:50.759219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 00:59:50.760797 systemd[1]: kubelet.service: Consumed 136ms CPU time, 105.7M memory peak. Jan 14 00:59:57.095233 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 00:59:57.096559 systemd[1]: Started sshd@0-10.0.30.209:22-20.161.92.111:45314.service - OpenSSH per-connection server daemon (20.161.92.111:45314). Jan 14 00:59:57.649035 sshd[2280]: Accepted publickey for core from 20.161.92.111 port 45314 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 00:59:57.651379 sshd-session[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:57.661772 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 00:59:57.662707 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 00:59:57.666485 systemd-logind[1648]: New session 1 of user core. 
Jan 14 00:59:57.684311 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 00:59:57.686848 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 00:59:57.705307 (systemd)[2292]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:57.709320 systemd-logind[1648]: New session 2 of user core. Jan 14 00:59:57.827919 systemd[2292]: Queued start job for default target default.target. Jan 14 00:59:57.849144 systemd[2292]: Created slice app.slice - User Application Slice. Jan 14 00:59:57.849179 systemd[2292]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 00:59:57.849191 systemd[2292]: Reached target paths.target - Paths. Jan 14 00:59:57.849240 systemd[2292]: Reached target timers.target - Timers. Jan 14 00:59:57.850470 systemd[2292]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 00:59:57.851234 systemd[2292]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 00:59:57.860418 systemd[2292]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 00:59:57.860481 systemd[2292]: Reached target sockets.target - Sockets. Jan 14 00:59:57.862732 systemd[2292]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 00:59:57.862871 systemd[2292]: Reached target basic.target - Basic System. Jan 14 00:59:57.862924 systemd[2292]: Reached target default.target - Main User Target. Jan 14 00:59:57.862949 systemd[2292]: Startup finished in 149ms. Jan 14 00:59:57.863348 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 00:59:57.870952 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 00:59:58.188093 systemd[1]: Started sshd@1-10.0.30.209:22-20.161.92.111:45328.service - OpenSSH per-connection server daemon (20.161.92.111:45328). 
Jan 14 00:59:58.724116 sshd[2308]: Accepted publickey for core from 20.161.92.111 port 45328 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 00:59:58.725475 sshd-session[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:58.730269 systemd-logind[1648]: New session 3 of user core. Jan 14 00:59:58.738023 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 00:59:59.028862 sshd[2312]: Connection closed by 20.161.92.111 port 45328 Jan 14 00:59:59.029203 sshd-session[2308]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:59.032953 systemd[1]: sshd@1-10.0.30.209:22-20.161.92.111:45328.service: Deactivated successfully. Jan 14 00:59:59.034655 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 00:59:59.035962 systemd-logind[1648]: Session 3 logged out. Waiting for processes to exit. Jan 14 00:59:59.036845 systemd-logind[1648]: Removed session 3. Jan 14 00:59:59.139347 systemd[1]: Started sshd@2-10.0.30.209:22-20.161.92.111:45338.service - OpenSSH per-connection server daemon (20.161.92.111:45338). Jan 14 00:59:59.674736 sshd[2318]: Accepted publickey for core from 20.161.92.111 port 45338 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 00:59:59.675682 sshd-session[2318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 00:59:59.680386 systemd-logind[1648]: New session 4 of user core. Jan 14 00:59:59.691964 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 00:59:59.964886 sshd[2322]: Connection closed by 20.161.92.111 port 45338 Jan 14 00:59:59.965223 sshd-session[2318]: pam_unix(sshd:session): session closed for user core Jan 14 00:59:59.969296 systemd[1]: sshd@2-10.0.30.209:22-20.161.92.111:45338.service: Deactivated successfully. Jan 14 00:59:59.970927 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 00:59:59.973231 systemd-logind[1648]: Session 4 logged out. 
Waiting for processes to exit. Jan 14 00:59:59.974155 systemd-logind[1648]: Removed session 4. Jan 14 01:00:00.076808 systemd[1]: Started sshd@3-10.0.30.209:22-20.161.92.111:45342.service - OpenSSH per-connection server daemon (20.161.92.111:45342). Jan 14 01:00:00.594170 sshd[2328]: Accepted publickey for core from 20.161.92.111 port 45342 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:00:00.595515 sshd-session[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:00.599337 systemd-logind[1648]: New session 5 of user core. Jan 14 01:00:00.609899 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 01:00:00.792118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 29. Jan 14 01:00:00.793781 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:00:00.889225 sshd[2332]: Connection closed by 20.161.92.111 port 45342 Jan 14 01:00:00.889498 sshd-session[2328]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:00.893451 systemd[1]: sshd@3-10.0.30.209:22-20.161.92.111:45342.service: Deactivated successfully. Jan 14 01:00:00.895200 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 01:00:00.897320 systemd-logind[1648]: Session 5 logged out. Waiting for processes to exit. Jan 14 01:00:00.899638 systemd-logind[1648]: Removed session 5. Jan 14 01:00:00.995927 systemd[1]: Started sshd@4-10.0.30.209:22-20.161.92.111:45354.service - OpenSSH per-connection server daemon (20.161.92.111:45354). Jan 14 01:00:01.151874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:00:01.155326 (kubelet)[2349]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:00:01.185323 kubelet[2349]: E0114 01:00:01.185274 2349 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:00:01.187835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:00:01.187962 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:00:01.188296 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.5M memory peak. Jan 14 01:00:01.516717 sshd[2341]: Accepted publickey for core from 20.161.92.111 port 45354 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:00:01.517767 sshd-session[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:01.522355 systemd-logind[1648]: New session 6 of user core. Jan 14 01:00:01.529061 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 01:00:01.738016 sudo[2359]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 01:00:01.738280 sudo[2359]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:00:01.764908 sudo[2359]: pam_unix(sudo:session): session closed for user root Jan 14 01:00:01.862016 sshd[2358]: Connection closed by 20.161.92.111 port 45354 Jan 14 01:00:01.862243 sshd-session[2341]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:01.867087 systemd[1]: sshd@4-10.0.30.209:22-20.161.92.111:45354.service: Deactivated successfully. Jan 14 01:00:01.868874 systemd[1]: session-6.scope: Deactivated successfully. 
Jan 14 01:00:01.869711 systemd-logind[1648]: Session 6 logged out. Waiting for processes to exit. Jan 14 01:00:01.872140 systemd-logind[1648]: Removed session 6. Jan 14 01:00:01.988229 systemd[1]: Started sshd@5-10.0.30.209:22-20.161.92.111:45362.service - OpenSSH per-connection server daemon (20.161.92.111:45362). Jan 14 01:00:02.544752 sshd[2366]: Accepted publickey for core from 20.161.92.111 port 45362 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:00:02.545899 sshd-session[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:02.550457 systemd-logind[1648]: New session 7 of user core. Jan 14 01:00:02.559093 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 01:00:02.749049 sudo[2372]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 01:00:02.749311 sudo[2372]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:00:02.752077 sudo[2372]: pam_unix(sudo:session): session closed for user root Jan 14 01:00:02.757593 sudo[2371]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 01:00:02.757884 sudo[2371]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:00:02.764292 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 14 01:00:02.811000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:00:02.813083 kernel: kauditd_printk_skb: 188 callbacks suppressed Jan 14 01:00:02.813146 kernel: audit: type=1305 audit(1768352402.811:232): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 01:00:02.815025 augenrules[2396]: No rules Jan 14 01:00:02.811000 audit[2396]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9e81350 a2=420 a3=0 items=0 ppid=2377 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:02.818920 kernel: audit: type=1300 audit(1768352402.811:232): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff9e81350 a2=420 a3=0 items=0 ppid=2377 pid=2396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:02.819239 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 01:00:02.819464 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 01:00:02.811000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:00:02.820346 sudo[2371]: pam_unix(sudo:session): session closed for user root Jan 14 01:00:02.821681 kernel: audit: type=1327 audit(1768352402.811:232): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 01:00:02.821783 kernel: audit: type=1130 audit(1768352402.818:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:02.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.826435 kernel: audit: type=1131 audit(1768352402.818:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.826491 kernel: audit: type=1106 audit(1768352402.819:235): pid=2371 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.819000 audit[2371]: USER_END pid=2371 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.819000 audit[2371]: CRED_DISP pid=2371 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.831332 kernel: audit: type=1104 audit(1768352402.819:236): pid=2371 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:02.920734 sshd[2370]: Connection closed by 20.161.92.111 port 45362 Jan 14 01:00:02.920677 sshd-session[2366]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:02.921000 audit[2366]: USER_END pid=2366 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:02.924452 systemd[1]: sshd@5-10.0.30.209:22-20.161.92.111:45362.service: Deactivated successfully. Jan 14 01:00:02.921000 audit[2366]: CRED_DISP pid=2366 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:02.926093 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 01:00:02.928783 kernel: audit: type=1106 audit(1768352402.921:237): pid=2366 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:02.928846 kernel: audit: type=1104 audit(1768352402.921:238): pid=2366 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:02.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.30.209:22-20.161.92.111:45362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:02.932185 kernel: audit: type=1131 audit(1768352402.924:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.30.209:22-20.161.92.111:45362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:02.932304 systemd-logind[1648]: Session 7 logged out. Waiting for processes to exit. Jan 14 01:00:02.933303 systemd-logind[1648]: Removed session 7. Jan 14 01:00:03.030938 systemd[1]: Started sshd@6-10.0.30.209:22-20.161.92.111:56818.service - OpenSSH per-connection server daemon (20.161.92.111:56818). Jan 14 01:00:03.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.30.209:22-20.161.92.111:56818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:03.557000 audit[2405]: USER_ACCT pid=2405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:03.558786 sshd[2405]: Accepted publickey for core from 20.161.92.111 port 56818 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:00:03.559000 audit[2405]: CRED_ACQ pid=2405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:03.559000 audit[2405]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8e86ab0 a2=3 a3=0 items=0 ppid=1 pid=2405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:03.559000 
audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:00:03.560487 sshd-session[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:00:03.564459 systemd-logind[1648]: New session 8 of user core. Jan 14 01:00:03.573931 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 01:00:03.575000 audit[2405]: USER_START pid=2405 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:03.576000 audit[2409]: CRED_ACQ pid=2409 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:03.758000 audit[2410]: USER_ACCT pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:03.759828 sudo[2410]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 01:00:03.759000 audit[2410]: CRED_REFR pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:03.759000 audit[2410]: USER_START pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:03.760097 sudo[2410]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 01:00:04.085055 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 01:00:04.099205 (dockerd)[2432]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 01:00:04.351763 dockerd[2432]: time="2026-01-14T01:00:04.351289488Z" level=info msg="Starting up" Jan 14 01:00:04.353034 dockerd[2432]: time="2026-01-14T01:00:04.353002933Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 01:00:04.362558 dockerd[2432]: time="2026-01-14T01:00:04.362526883Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 01:00:04.407898 dockerd[2432]: time="2026-01-14T01:00:04.407822941Z" level=info msg="Loading containers: start." 
Jan 14 01:00:04.424728 kernel: Initializing XFRM netlink socket Jan 14 01:00:04.478000 audit[2483]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.478000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc95fd1b0 a2=0 a3=0 items=0 ppid=2432 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.478000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:00:04.480000 audit[2485]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.480000 audit[2485]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd8ff43f0 a2=0 a3=0 items=0 ppid=2432 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.480000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:00:04.482000 audit[2487]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.482000 audit[2487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc3e58800 a2=0 a3=0 items=0 ppid=2432 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.482000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:00:04.483000 audit[2489]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.483000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7976010 a2=0 a3=0 items=0 ppid=2432 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.483000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:00:04.485000 audit[2491]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.485000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc586dfb0 a2=0 a3=0 items=0 ppid=2432 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.485000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:00:04.487000 audit[2493]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.487000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdadd6e80 a2=0 a3=0 items=0 ppid=2432 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.487000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:00:04.489000 audit[2495]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.489000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff18a5200 a2=0 a3=0 items=0 ppid=2432 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.489000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:00:04.491000 audit[2497]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.491000 audit[2497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff05714b0 a2=0 a3=0 items=0 ppid=2432 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.491000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:00:04.525000 audit[2500]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.525000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc07634b0 a2=0 a3=0 items=0 ppid=2432 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.525000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 01:00:04.527000 audit[2502]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.527000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffde74b350 a2=0 a3=0 items=0 ppid=2432 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.527000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:00:04.529000 audit[2504]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.529000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe6e6b350 a2=0 a3=0 items=0 ppid=2432 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.529000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:00:04.530000 audit[2506]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.530000 audit[2506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc6993740 a2=0 a3=0 items=0 ppid=2432 pid=2506 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.530000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:00:04.532000 audit[2508]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.532000 audit[2508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffe60f8d90 a2=0 a3=0 items=0 ppid=2432 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.532000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:00:04.568000 audit[2538]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.568000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe78e63c0 a2=0 a3=0 items=0 ppid=2432 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.568000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 01:00:04.570000 audit[2540]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.570000 audit[2540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe2a16440 a2=0 a3=0 items=0 
ppid=2432 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.570000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 01:00:04.571000 audit[2542]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.571000 audit[2542]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca82b500 a2=0 a3=0 items=0 ppid=2432 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.571000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 01:00:04.573000 audit[2544]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.573000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce3bde70 a2=0 a3=0 items=0 ppid=2432 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.573000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 01:00:04.575000 audit[2546]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.575000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffddac0aa0 a2=0 a3=0 items=0 ppid=2432 
pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.575000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 01:00:04.577000 audit[2548]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.577000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd47e4520 a2=0 a3=0 items=0 ppid=2432 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.577000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:00:04.579000 audit[2550]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.579000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc614f2c0 a2=0 a3=0 items=0 ppid=2432 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.579000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:00:04.581000 audit[2552]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.581000 audit[2552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 
a1=ffffdd65cc60 a2=0 a3=0 items=0 ppid=2432 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.581000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 01:00:04.583000 audit[2554]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.583000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff4355720 a2=0 a3=0 items=0 ppid=2432 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.583000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 01:00:04.585000 audit[2556]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.585000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeadd1cb0 a2=0 a3=0 items=0 ppid=2432 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.585000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 01:00:04.586000 audit[2558]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule 
pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.586000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdf901270 a2=0 a3=0 items=0 ppid=2432 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.586000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 01:00:04.588000 audit[2560]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.588000 audit[2560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffddfa0bd0 a2=0 a3=0 items=0 ppid=2432 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.588000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 01:00:04.590000 audit[2562]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.590000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffa50a170 a2=0 a3=0 items=0 ppid=2432 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.590000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 01:00:04.594000 audit[2567]: 
NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.594000 audit[2567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff4658820 a2=0 a3=0 items=0 ppid=2432 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.594000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:00:04.596000 audit[2569]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.596000 audit[2569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff1d5d710 a2=0 a3=0 items=0 ppid=2432 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.596000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:00:04.598000 audit[2571]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.598000 audit[2571]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc4c8e630 a2=0 a3=0 items=0 ppid=2432 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.598000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:00:04.600000 audit[2573]: NETFILTER_CFG 
table=filter:31 family=10 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.600000 audit[2573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe1ca2800 a2=0 a3=0 items=0 ppid=2432 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.600000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 01:00:04.602000 audit[2575]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.602000 audit[2575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffea216a10 a2=0 a3=0 items=0 ppid=2432 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.602000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 01:00:04.604000 audit[2577]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:04.604000 audit[2577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff1437310 a2=0 a3=0 items=0 ppid=2432 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 01:00:04.623000 audit[2582]: NETFILTER_CFG 
table=nat:34 family=2 entries=2 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.623000 audit[2582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffda8dde60 a2=0 a3=0 items=0 ppid=2432 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 01:00:04.625000 audit[2584]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.625000 audit[2584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc90c7120 a2=0 a3=0 items=0 ppid=2432 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.625000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 01:00:04.633000 audit[2592]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.633000 audit[2592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd88fc280 a2=0 a3=0 items=0 ppid=2432 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.633000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 01:00:04.643000 audit[2598]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.643000 audit[2598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffde488f90 a2=0 a3=0 items=0 ppid=2432 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 01:00:04.645000 audit[2600]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.645000 audit[2600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd9529f40 a2=0 a3=0 items=0 ppid=2432 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.645000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 01:00:04.646000 audit[2602]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.646000 audit[2602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc1106020 a2=0 a3=0 items=0 ppid=2432 pid=2602 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.646000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 01:00:04.648000 audit[2604]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.648000 audit[2604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc0f83710 a2=0 a3=0 items=0 ppid=2432 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.648000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 01:00:04.650000 audit[2606]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:04.650000 audit[2606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe57f3e20 a2=0 a3=0 items=0 ppid=2432 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:04.650000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 01:00:04.651554 systemd-networkd[1584]: docker0: Link UP Jan 14 01:00:04.657066 dockerd[2432]: 
time="2026-01-14T01:00:04.657029865Z" level=info msg="Loading containers: done." Jan 14 01:00:04.679760 dockerd[2432]: time="2026-01-14T01:00:04.679718734Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 01:00:04.679914 dockerd[2432]: time="2026-01-14T01:00:04.679803055Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 01:00:04.679991 dockerd[2432]: time="2026-01-14T01:00:04.679972695Z" level=info msg="Initializing buildkit" Jan 14 01:00:04.707506 dockerd[2432]: time="2026-01-14T01:00:04.707472619Z" level=info msg="Completed buildkit initialization" Jan 14 01:00:04.712179 dockerd[2432]: time="2026-01-14T01:00:04.712141354Z" level=info msg="Daemon has completed initialization" Jan 14 01:00:04.712391 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 01:00:04.712518 dockerd[2432]: time="2026-01-14T01:00:04.712286834Z" level=info msg="API listen on /run/docker.sock" Jan 14 01:00:04.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:05.378723 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1350394304-merged.mount: Deactivated successfully. Jan 14 01:00:06.027634 containerd[1673]: time="2026-01-14T01:00:06.027595304Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 14 01:00:06.868436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751848905.mount: Deactivated successfully. 
Jan 14 01:00:07.475875 containerd[1673]: time="2026-01-14T01:00:07.475826342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:07.477871 containerd[1673]: time="2026-01-14T01:00:07.477677667Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 14 01:00:07.478843 containerd[1673]: time="2026-01-14T01:00:07.478811751Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:07.482899 containerd[1673]: time="2026-01-14T01:00:07.482857883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:07.483828 containerd[1673]: time="2026-01-14T01:00:07.483797566Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.456159142s" Jan 14 01:00:07.483924 containerd[1673]: time="2026-01-14T01:00:07.483910046Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 14 01:00:07.486284 containerd[1673]: time="2026-01-14T01:00:07.486255894Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 14 01:00:08.747032 containerd[1673]: time="2026-01-14T01:00:08.746979196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:08.748975 containerd[1673]: time="2026-01-14T01:00:08.748947042Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 14 01:00:08.750194 containerd[1673]: time="2026-01-14T01:00:08.750151566Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:08.753766 containerd[1673]: time="2026-01-14T01:00:08.753682697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:08.755195 containerd[1673]: time="2026-01-14T01:00:08.755158861Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.268865487s" Jan 14 01:00:08.755195 containerd[1673]: time="2026-01-14T01:00:08.755191422Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 14 01:00:08.755822 containerd[1673]: time="2026-01-14T01:00:08.755787983Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 14 01:00:09.936028 containerd[1673]: time="2026-01-14T01:00:09.935926879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:09.937749 containerd[1673]: time="2026-01-14T01:00:09.937420084Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Jan 14 01:00:09.938934 containerd[1673]: time="2026-01-14T01:00:09.938858168Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:09.941729 containerd[1673]: time="2026-01-14T01:00:09.941624017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:09.942753 containerd[1673]: time="2026-01-14T01:00:09.942594020Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.186784357s" Jan 14 01:00:09.942753 containerd[1673]: time="2026-01-14T01:00:09.942624580Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 14 01:00:09.943032 containerd[1673]: time="2026-01-14T01:00:09.943005781Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 14 01:00:10.938998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452425718.mount: Deactivated successfully. Jan 14 01:00:11.236674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 30. Jan 14 01:00:11.238121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:00:11.323726 containerd[1673]: time="2026-01-14T01:00:11.321187804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:11.325107 containerd[1673]: time="2026-01-14T01:00:11.325038856Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Jan 14 01:00:11.327562 containerd[1673]: time="2026-01-14T01:00:11.327518824Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:11.333628 containerd[1673]: time="2026-01-14T01:00:11.333582442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:11.334222 containerd[1673]: time="2026-01-14T01:00:11.334185844Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.391147503s" Jan 14 01:00:11.334263 containerd[1673]: time="2026-01-14T01:00:11.334222164Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 14 01:00:11.334784 containerd[1673]: time="2026-01-14T01:00:11.334675846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 14 01:00:11.379129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 01:00:11.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:11.383682 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 14 01:00:11.383776 kernel: audit: type=1130 audit(1768352411.378:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:11.392360 (kubelet)[2728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 01:00:11.422109 kubelet[2728]: E0114 01:00:11.422065 2728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 01:00:11.424655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 01:00:11.424821 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 01:00:11.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:00:11.425793 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.1M memory peak. Jan 14 01:00:11.428724 kernel: audit: type=1131 audit(1768352411.425:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 14 01:00:12.325239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773624196.mount: Deactivated successfully. Jan 14 01:00:12.861703 containerd[1673]: time="2026-01-14T01:00:12.861641965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:12.862318 containerd[1673]: time="2026-01-14T01:00:12.862265126Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Jan 14 01:00:12.863311 containerd[1673]: time="2026-01-14T01:00:12.863285530Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:12.866791 containerd[1673]: time="2026-01-14T01:00:12.866709660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:12.868072 containerd[1673]: time="2026-01-14T01:00:12.868024744Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.533252298s" Jan 14 01:00:12.868114 containerd[1673]: time="2026-01-14T01:00:12.868070464Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 14 01:00:12.868829 containerd[1673]: time="2026-01-14T01:00:12.868799546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 01:00:13.408446 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2571266865.mount: Deactivated successfully. Jan 14 01:00:13.413616 containerd[1673]: time="2026-01-14T01:00:13.413548136Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:00:13.414673 containerd[1673]: time="2026-01-14T01:00:13.414625019Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 01:00:13.416012 containerd[1673]: time="2026-01-14T01:00:13.415976743Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:00:13.418738 containerd[1673]: time="2026-01-14T01:00:13.418657111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 01:00:13.419718 containerd[1673]: time="2026-01-14T01:00:13.419263753Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 550.430246ms" Jan 14 01:00:13.419718 containerd[1673]: time="2026-01-14T01:00:13.419299513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 14 01:00:13.419833 containerd[1673]: time="2026-01-14T01:00:13.419730595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 14 01:00:13.965658 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount194113740.mount: Deactivated successfully. Jan 14 01:00:15.897263 containerd[1673]: time="2026-01-14T01:00:15.897177665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:15.898280 containerd[1673]: time="2026-01-14T01:00:15.898225429Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=68134789" Jan 14 01:00:15.899359 containerd[1673]: time="2026-01-14T01:00:15.899313792Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:15.902458 containerd[1673]: time="2026-01-14T01:00:15.902433441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:15.903402 containerd[1673]: time="2026-01-14T01:00:15.903371004Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.483611169s" Jan 14 01:00:15.903440 containerd[1673]: time="2026-01-14T01:00:15.903405124Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 14 01:00:21.351481 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:00:21.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 14 01:00:21.351640 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.1M memory peak. Jan 14 01:00:21.353716 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:00:21.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:21.356565 kernel: audit: type=1130 audit(1768352421.350:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:21.356639 kernel: audit: type=1131 audit(1768352421.350:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:21.380156 systemd[1]: Reload requested from client PID 2880 ('systemctl') (unit session-8.scope)... Jan 14 01:00:21.380173 systemd[1]: Reloading... Jan 14 01:00:21.464717 zram_generator::config[2930]: No configuration found. Jan 14 01:00:21.641016 systemd[1]: Reloading finished in 260 ms. 
Jan 14 01:00:21.671000 audit: BPF prog-id=63 op=LOAD Jan 14 01:00:21.671000 audit: BPF prog-id=58 op=UNLOAD Jan 14 01:00:21.673962 kernel: audit: type=1334 audit(1768352421.671:294): prog-id=63 op=LOAD Jan 14 01:00:21.674012 kernel: audit: type=1334 audit(1768352421.671:295): prog-id=58 op=UNLOAD Jan 14 01:00:21.674031 kernel: audit: type=1334 audit(1768352421.672:296): prog-id=64 op=LOAD Jan 14 01:00:21.674048 kernel: audit: type=1334 audit(1768352421.672:297): prog-id=43 op=UNLOAD Jan 14 01:00:21.672000 audit: BPF prog-id=64 op=LOAD Jan 14 01:00:21.672000 audit: BPF prog-id=43 op=UNLOAD Jan 14 01:00:21.675456 kernel: audit: type=1334 audit(1768352421.673:298): prog-id=65 op=LOAD Jan 14 01:00:21.673000 audit: BPF prog-id=65 op=LOAD Jan 14 01:00:21.674000 audit: BPF prog-id=66 op=LOAD Jan 14 01:00:21.678206 kernel: audit: type=1334 audit(1768352421.674:299): prog-id=66 op=LOAD Jan 14 01:00:21.678240 kernel: audit: type=1334 audit(1768352421.674:300): prog-id=44 op=UNLOAD Jan 14 01:00:21.674000 audit: BPF prog-id=44 op=UNLOAD Jan 14 01:00:21.679640 kernel: audit: type=1334 audit(1768352421.674:301): prog-id=45 op=UNLOAD Jan 14 01:00:21.674000 audit: BPF prog-id=45 op=UNLOAD Jan 14 01:00:21.676000 audit: BPF prog-id=67 op=LOAD Jan 14 01:00:21.676000 audit: BPF prog-id=68 op=LOAD Jan 14 01:00:21.676000 audit: BPF prog-id=46 op=UNLOAD Jan 14 01:00:21.676000 audit: BPF prog-id=47 op=UNLOAD Jan 14 01:00:21.676000 audit: BPF prog-id=69 op=LOAD Jan 14 01:00:21.680000 audit: BPF prog-id=59 op=UNLOAD Jan 14 01:00:21.680000 audit: BPF prog-id=70 op=LOAD Jan 14 01:00:21.680000 audit: BPF prog-id=51 op=UNLOAD Jan 14 01:00:21.681000 audit: BPF prog-id=71 op=LOAD Jan 14 01:00:21.681000 audit: BPF prog-id=55 op=UNLOAD Jan 14 01:00:21.681000 audit: BPF prog-id=72 op=LOAD Jan 14 01:00:21.681000 audit: BPF prog-id=73 op=LOAD Jan 14 01:00:21.681000 audit: BPF prog-id=56 op=UNLOAD Jan 14 01:00:21.681000 audit: BPF prog-id=57 op=UNLOAD Jan 14 01:00:21.682000 audit: BPF prog-id=74 
op=LOAD Jan 14 01:00:21.682000 audit: BPF prog-id=52 op=UNLOAD Jan 14 01:00:21.682000 audit: BPF prog-id=75 op=LOAD Jan 14 01:00:21.682000 audit: BPF prog-id=76 op=LOAD Jan 14 01:00:21.682000 audit: BPF prog-id=53 op=UNLOAD Jan 14 01:00:21.682000 audit: BPF prog-id=54 op=UNLOAD Jan 14 01:00:21.683000 audit: BPF prog-id=77 op=LOAD Jan 14 01:00:21.683000 audit: BPF prog-id=48 op=UNLOAD Jan 14 01:00:21.683000 audit: BPF prog-id=78 op=LOAD Jan 14 01:00:21.683000 audit: BPF prog-id=79 op=LOAD Jan 14 01:00:21.683000 audit: BPF prog-id=49 op=UNLOAD Jan 14 01:00:21.683000 audit: BPF prog-id=50 op=UNLOAD Jan 14 01:00:21.684000 audit: BPF prog-id=80 op=LOAD Jan 14 01:00:21.684000 audit: BPF prog-id=60 op=UNLOAD Jan 14 01:00:21.684000 audit: BPF prog-id=81 op=LOAD Jan 14 01:00:21.684000 audit: BPF prog-id=82 op=LOAD Jan 14 01:00:21.684000 audit: BPF prog-id=61 op=UNLOAD Jan 14 01:00:21.684000 audit: BPF prog-id=62 op=UNLOAD Jan 14 01:00:21.711525 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 01:00:21.711602 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 01:00:21.712080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:00:21.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 01:00:21.712138 systemd[1]: kubelet.service: Consumed 98ms CPU time, 95.1M memory peak. Jan 14 01:00:21.713565 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:00:22.555505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:00:22.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:00:22.565158 (kubelet)[2976]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:00:22.594576 kubelet[2976]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:00:22.594576 kubelet[2976]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 01:00:22.594576 kubelet[2976]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:00:22.594916 kubelet[2976]: I0114 01:00:22.594623 2976 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:00:23.124713 kubelet[2976]: I0114 01:00:23.124561 2976 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:00:23.124713 kubelet[2976]: I0114 01:00:23.124590 2976 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:00:23.125075 kubelet[2976]: I0114 01:00:23.125055 2976 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:00:23.168487 kubelet[2976]: E0114 01:00:23.168448 2976 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.30.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.30.209:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 14 01:00:23.169708 kubelet[2976]: I0114 01:00:23.169502 
2976 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:00:23.179626 kubelet[2976]: I0114 01:00:23.179604 2976 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:00:23.182226 kubelet[2976]: I0114 01:00:23.182199 2976 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 01:00:23.182542 kubelet[2976]: I0114 01:00:23.182470 2976 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:00:23.182655 kubelet[2976]: I0114 01:00:23.182496 2976 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-a666ba3d92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":
"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:00:23.182809 kubelet[2976]: I0114 01:00:23.182763 2976 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 01:00:23.182809 kubelet[2976]: I0114 01:00:23.182774 2976 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:00:23.182984 kubelet[2976]: I0114 01:00:23.182965 2976 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:00:23.187607 kubelet[2976]: I0114 01:00:23.187571 2976 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:00:23.187607 kubelet[2976]: I0114 01:00:23.187594 2976 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:00:23.187607 kubelet[2976]: I0114 01:00:23.187617 2976 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:00:23.187864 kubelet[2976]: I0114 01:00:23.187630 2976 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:00:23.190144 kubelet[2976]: E0114 01:00:23.190035 2976 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.30.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.30.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 14 01:00:23.190144 kubelet[2976]: I0114 01:00:23.190121 2976 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:00:23.191899 kubelet[2976]: E0114 01:00:23.191831 2976 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.30.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-n-a666ba3d92&limit=500&resourceVersion=0\": dial tcp 10.0.30.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 14 01:00:23.191953 kubelet[2976]: I0114 01:00:23.191904 2976 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:00:23.192061 kubelet[2976]: W0114 01:00:23.192038 2976 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 14 01:00:23.195905 kubelet[2976]: I0114 01:00:23.195875 2976 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:00:23.195973 kubelet[2976]: I0114 01:00:23.195916 2976 server.go:1289] "Started kubelet" Jan 14 01:00:23.196860 kubelet[2976]: I0114 01:00:23.196663 2976 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:00:23.199355 kubelet[2976]: I0114 01:00:23.199290 2976 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:00:23.199727 kubelet[2976]: I0114 01:00:23.199628 2976 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:00:23.201157 kubelet[2976]: I0114 01:00:23.199767 2976 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:00:23.202875 kubelet[2976]: I0114 01:00:23.202845 2976 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 01:00:23.203334 kubelet[2976]: I0114 01:00:23.203296 2976 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:00:23.203430 kubelet[2976]: E0114 01:00:23.203398 2976 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:00:23.203430 kubelet[2976]: I0114 01:00:23.203399 2976 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:00:23.204283 kubelet[2976]: E0114 01:00:23.204252 2976 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" Jan 14 01:00:23.204378 kubelet[2976]: E0114 01:00:23.204351 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.30.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a666ba3d92?timeout=10s\": dial tcp 10.0.30.209:6443: connect: connection refused" interval="200ms" Jan 14 01:00:23.204741 kubelet[2976]: E0114 01:00:23.204715 2976 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.30.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.30.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 14 01:00:23.204781 kubelet[2976]: I0114 01:00:23.204751 2976 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:00:23.204817 kubelet[2976]: I0114 01:00:23.204805 2976 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:00:23.205812 kubelet[2976]: E0114 01:00:23.203899 2976 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.30.209:6443/api/v1/namespaces/default/events\": dial tcp 10.0.30.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-n-a666ba3d92.188a7328e64ad914 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-a666ba3d92,UID:ci-4547-0-0-n-a666ba3d92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-a666ba3d92,},FirstTimestamp:2026-01-14 01:00:23.195891988 +0000 UTC m=+0.627537403,LastTimestamp:2026-01-14 01:00:23.195891988 +0000 UTC m=+0.627537403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-a666ba3d92,}" Jan 14 01:00:23.206123 kubelet[2976]: I0114 01:00:23.206100 2976 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:00:23.206270 kubelet[2976]: I0114 01:00:23.206250 2976 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:00:23.207425 kubelet[2976]: I0114 01:00:23.207395 2976 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:00:23.207000 audit[2993]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.207000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffee7d6200 a2=0 a3=0 items=0 ppid=2976 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.207000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:00:23.208000 audit[2994]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2994 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.208000 audit[2994]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6a5cf30 a2=0 a3=0 items=0 ppid=2976 pid=2994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:00:23.210000 audit[2996]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2996 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.210000 audit[2996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffef386d70 a2=0 a3=0 items=0 ppid=2976 pid=2996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:00:23.211000 audit[2998]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.211000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffa541ea0 a2=0 a3=0 items=0 ppid=2976 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.211000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:00:23.219000 audit[3001]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.219000 
audit[3001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffd08b36a0 a2=0 a3=0 items=0 ppid=2976 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 14 01:00:23.219949 kubelet[2976]: I0114 01:00:23.219904 2976 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:00:23.219949 kubelet[2976]: I0114 01:00:23.219917 2976 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:00:23.219949 kubelet[2976]: I0114 01:00:23.219934 2976 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:00:23.220016 kubelet[2976]: I0114 01:00:23.219973 2976 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:00:23.220000 audit[3002]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:23.220000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff44b5e60 a2=0 a3=0 items=0 ppid=2976 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.220000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 14 01:00:23.221749 kubelet[2976]: I0114 01:00:23.221154 2976 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:00:23.221749 kubelet[2976]: I0114 01:00:23.221173 2976 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:00:23.221749 kubelet[2976]: I0114 01:00:23.221193 2976 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:00:23.221749 kubelet[2976]: I0114 01:00:23.221201 2976 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:00:23.221749 kubelet[2976]: E0114 01:00:23.221235 2976 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:00:23.221000 audit[3003]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=3003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.221000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd28fb080 a2=0 a3=0 items=0 ppid=2976 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.221000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:00:23.222341 kubelet[2976]: I0114 01:00:23.222313 2976 policy_none.go:49] "None policy: Start" Jan 14 01:00:23.222341 kubelet[2976]: I0114 01:00:23.222339 2976 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:00:23.222394 kubelet[2976]: I0114 01:00:23.222351 2976 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:00:23.222000 audit[3005]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.222000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8035a60 a2=0 a3=0 
items=0 ppid=2976 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.222000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:00:23.223000 audit[3006]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:23.223000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4f48940 a2=0 a3=0 items=0 ppid=2976 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.223000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:00:23.225830 kubelet[2976]: E0114 01:00:23.225758 2976 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.30.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.30.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 14 01:00:23.225000 audit[3008]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=3008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:23.225000 audit[3008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc849f110 a2=0 a3=0 items=0 ppid=2976 pid=3008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.225000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 14 01:00:23.227000 audit[3010]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=3010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:23.227000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff50f1420 a2=0 a3=0 items=0 ppid=2976 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 14 01:00:23.227000 audit[3011]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:23.228611 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 01:00:23.227000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd12702a0 a2=0 a3=0 items=0 ppid=2976 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.227000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 14 01:00:23.242898 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 01:00:23.246000 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
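The audit PROCTITLE values in the NETFILTER_CFG records above are the process command lines hex-encoded, with NUL bytes separating the argv elements. They can be decoded back into the iptables/ip6tables invocations kubelet issued — a small sketch, using the KUBE-KUBELET-CANARY record from the log as the worked example:

```python
def decode_proctitle(hex_str: str) -> str:
    """Decode an audit PROCTITLE field: hex-encoded argv with NUL
    separators between arguments; rejoin the arguments with spaces."""
    return bytes.fromhex(hex_str).replace(b"\x00", b" ").decode("ascii")

# The mangle-table chain-creation record above decodes to:
cmd = decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65"
)
print(cmd)  # iptables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t mangle
```

The decoded command matches the `comm="iptables"` and `exe="/usr/bin/xtables-nft-multi"` fields on the same record: kubelet probes its canary chains through the nft-backed xtables shim.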
Jan 14 01:00:23.263986 kubelet[2976]: E0114 01:00:23.263959 2976 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:00:23.264509 kubelet[2976]: I0114 01:00:23.264488 2976 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:00:23.264672 kubelet[2976]: I0114 01:00:23.264595 2976 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:00:23.264918 kubelet[2976]: I0114 01:00:23.264889 2976 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:00:23.265645 kubelet[2976]: E0114 01:00:23.265623 2976 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:00:23.265746 kubelet[2976]: E0114 01:00:23.265659 2976 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-n-a666ba3d92\" not found" Jan 14 01:00:23.332878 systemd[1]: Created slice kubepods-burstable-podaf1c47d95b80b484814f121d7cb4ba09.slice - libcontainer container kubepods-burstable-podaf1c47d95b80b484814f121d7cb4ba09.slice. Jan 14 01:00:23.346793 kubelet[2976]: E0114 01:00:23.346763 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.350618 systemd[1]: Created slice kubepods-burstable-podc06683222dcf481251d781d6b970697e.slice - libcontainer container kubepods-burstable-podc06683222dcf481251d781d6b970697e.slice. 
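The slice names systemd creates here encode the kubelet's cgroup layout under the systemd driver: a per-QoS parent slice (`kubepods-burstable.slice`, `kubepods-besteffort.slice`) and a per-pod child slice named after the pod UID with its dashes stripped. A sketch of that naming scheme follows; the dashed UID below is a hypothetical reconstruction (the log only shows the dash-free form), and guaranteed-QoS handling is assumed from kubelet's documented behavior rather than visible in this log:

```python
def pod_slice_name(qos: str, pod_uid: str) -> str:
    """Reconstruct the systemd slice name kubelet uses for a pod:
    kubepods[-<qos>]-pod<uid-without-dashes>.slice. Guaranteed-QoS pods
    sit directly under kubepods.slice (assumption, not shown in this log)."""
    prefix = "kubepods" if qos == "guaranteed" else f"kubepods-{qos}"
    return f"{prefix}-pod{pod_uid.replace('-', '')}.slice"

# Hypothetical dashed UID matching the burstable pod slice in the log:
print(pod_slice_name("burstable", "af1c47d9-5b80-b484-814f-121d7cb4ba09"))
# kubepods-burstable-podaf1c47d95b80b484814f121d7cb4ba09.slice
```

This is why the static-pod slices created a few entries later carry the UIDs (`af1c47d9...`, `c0668322...`, `6a22e3d9...`) that also appear in the volume-reconciler and RunPodSandbox messages below.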
Jan 14 01:00:23.352344 kubelet[2976]: E0114 01:00:23.352320 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.354960 systemd[1]: Created slice kubepods-burstable-pod6a22e3d9e0e262d52d591c73d71ea4f7.slice - libcontainer container kubepods-burstable-pod6a22e3d9e0e262d52d591c73d71ea4f7.slice. Jan 14 01:00:23.356441 kubelet[2976]: E0114 01:00:23.356398 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.366752 kubelet[2976]: I0114 01:00:23.366718 2976 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.367167 kubelet[2976]: E0114 01:00:23.367133 2976 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.30.209:6443/api/v1/nodes\": dial tcp 10.0.30.209:6443: connect: connection refused" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.405849 kubelet[2976]: E0114 01:00:23.405745 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.30.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a666ba3d92?timeout=10s\": dial tcp 10.0.30.209:6443: connect: connection refused" interval="400ms" Jan 14 01:00:23.405849 kubelet[2976]: I0114 01:00:23.405796 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.405849 kubelet[2976]: I0114 01:00:23.405831 2976 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.405849 kubelet[2976]: I0114 01:00:23.405851 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/af1c47d95b80b484814f121d7cb4ba09-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" (UID: \"af1c47d95b80b484814f121d7cb4ba09\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.406003 kubelet[2976]: I0114 01:00:23.405867 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af1c47d95b80b484814f121d7cb4ba09-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" (UID: \"af1c47d95b80b484814f121d7cb4ba09\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.406003 kubelet[2976]: I0114 01:00:23.405912 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.406003 kubelet[2976]: I0114 01:00:23.405936 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: 
\"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.406003 kubelet[2976]: I0114 01:00:23.405951 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a22e3d9e0e262d52d591c73d71ea4f7-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-a666ba3d92\" (UID: \"6a22e3d9e0e262d52d591c73d71ea4f7\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.406087 kubelet[2976]: I0114 01:00:23.406002 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af1c47d95b80b484814f121d7cb4ba09-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" (UID: \"af1c47d95b80b484814f121d7cb4ba09\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.406087 kubelet[2976]: I0114 01:00:23.406031 2976 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.569164 kubelet[2976]: I0114 01:00:23.569117 2976 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.569504 kubelet[2976]: E0114 01:00:23.569469 2976 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.30.209:6443/api/v1/nodes\": dial tcp 10.0.30.209:6443: connect: connection refused" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.648341 containerd[1673]: time="2026-01-14T01:00:23.648289815Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-a666ba3d92,Uid:af1c47d95b80b484814f121d7cb4ba09,Namespace:kube-system,Attempt:0,}" Jan 14 01:00:23.653321 containerd[1673]: time="2026-01-14T01:00:23.653218830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-a666ba3d92,Uid:c06683222dcf481251d781d6b970697e,Namespace:kube-system,Attempt:0,}" Jan 14 01:00:23.658213 containerd[1673]: time="2026-01-14T01:00:23.658114405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-a666ba3d92,Uid:6a22e3d9e0e262d52d591c73d71ea4f7,Namespace:kube-system,Attempt:0,}" Jan 14 01:00:23.680612 containerd[1673]: time="2026-01-14T01:00:23.680550873Z" level=info msg="connecting to shim cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09" address="unix:///run/containerd/s/11f2bbae94ac6e3d669345f2520fd86fb837030b24b0c45f1256a98f9b88a4f9" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:00:23.689392 containerd[1673]: time="2026-01-14T01:00:23.689346260Z" level=info msg="connecting to shim c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc" address="unix:///run/containerd/s/7cd5e2bb2f6d4ed7c37d5d3507ae87598a9d5700dfbbff909807ff7d5ff7379a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:00:23.713323 containerd[1673]: time="2026-01-14T01:00:23.713275974Z" level=info msg="connecting to shim 33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb" address="unix:///run/containerd/s/fce870e94a27f5d02c6431250f80d9d956783842f49cfe549f0bfe502bb5d32b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:00:23.715919 systemd[1]: Started cri-containerd-c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc.scope - libcontainer container c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc. 
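Each RunPodSandbox above is followed by a "connecting to shim" entry: containerd starts one shim per sandbox, identified by a 64-hex-character ID and reached over a per-shim unix socket under `/run/containerd/s/`. A hedged sketch of pulling those pairs out of such log lines (the quoting layout is assumed from the entries above):

```python
import re

# The shim ID sits inside the msg="..." quotes; the socket address is a
# separate address="unix://..." field on the same containerd log line.
SHIM_RE = re.compile(
    r'connecting to shim (?P<id>[0-9a-f]{64})" address="(?P<addr>unix://[^"]+)"'
)

def parse_shim(line: str):
    """Return (shim_id, socket_address) from a containerd log line, or None."""
    m = SHIM_RE.search(line)
    return (m.group("id"), m.group("addr")) if m else None

line = (
    'time="2026-01-14T01:00:23.680550873Z" level=info '
    'msg="connecting to shim '
    'cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09" '
    'address="unix:///run/containerd/s/'
    '11f2bbae94ac6e3d669345f2520fd86fb837030b24b0c45f1256a98f9b88a4f9"'
)
print(parse_shim(line))
```

The same shim IDs reappear in the `cri-containerd-<id>.scope` units systemd starts immediately afterwards, which is how a sandbox in the CRI logs can be correlated with its cgroup scope.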
Jan 14 01:00:23.718474 systemd[1]: Started cri-containerd-cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09.scope - libcontainer container cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09. Jan 14 01:00:23.728000 audit: BPF prog-id=83 op=LOAD Jan 14 01:00:23.729000 audit: BPF prog-id=84 op=LOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.729000 audit: BPF prog-id=84 op=UNLOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.729000 audit: BPF prog-id=85 op=LOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.729000 audit: BPF prog-id=86 op=LOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.729000 audit: BPF prog-id=86 op=UNLOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.729000 audit: BPF prog-id=85 op=UNLOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.729000 audit: BPF prog-id=87 op=LOAD Jan 14 01:00:23.729000 audit[3054]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3042 pid=3054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.729000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335366530653464356262616563633234653366363733376537346666 Jan 14 01:00:23.741074 systemd[1]: Started cri-containerd-33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb.scope - libcontainer container 33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb. 
Jan 14 01:00:23.743000 audit: BPF prog-id=88 op=LOAD Jan 14 01:00:23.744000 audit: BPF prog-id=89 op=LOAD Jan 14 01:00:23.744000 audit[3058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.744000 audit: BPF prog-id=89 op=UNLOAD Jan 14 01:00:23.744000 audit[3058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.744000 audit: BPF prog-id=90 op=LOAD Jan 14 01:00:23.744000 audit[3058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.744000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.745000 audit: BPF prog-id=91 op=LOAD Jan 14 01:00:23.745000 audit[3058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.745000 audit: BPF prog-id=91 op=UNLOAD Jan 14 01:00:23.745000 audit[3058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.745000 audit: BPF prog-id=90 op=UNLOAD Jan 14 01:00:23.745000 audit[3058]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:00:23.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.745000 audit: BPF prog-id=92 op=LOAD Jan 14 01:00:23.745000 audit[3058]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3022 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364343337396232663530363263396237646662373363333961636333 Jan 14 01:00:23.751880 containerd[1673]: time="2026-01-14T01:00:23.751845492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-a666ba3d92,Uid:c06683222dcf481251d781d6b970697e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc\"" Jan 14 01:00:23.756000 audit: BPF prog-id=93 op=LOAD Jan 14 01:00:23.757000 audit: BPF prog-id=94 op=LOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.757000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.757000 audit: BPF prog-id=94 op=UNLOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.757000 audit: BPF prog-id=95 op=LOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.757000 audit: BPF prog-id=96 op=LOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 01:00:23.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.757000 audit: BPF prog-id=96 op=UNLOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.757000 audit: BPF prog-id=95 op=UNLOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.757000 audit: BPF prog-id=97 op=LOAD Jan 14 01:00:23.757000 audit[3100]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3086 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.757000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333616633386464326434386336326637346333643061393639666361 Jan 14 01:00:23.759399 containerd[1673]: time="2026-01-14T01:00:23.758998074Z" level=info msg="CreateContainer within sandbox \"c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 01:00:23.772656 containerd[1673]: time="2026-01-14T01:00:23.772591835Z" level=info msg="Container c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:23.781068 containerd[1673]: time="2026-01-14T01:00:23.781031461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-a666ba3d92,Uid:af1c47d95b80b484814f121d7cb4ba09,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09\"" Jan 14 01:00:23.781825 containerd[1673]: time="2026-01-14T01:00:23.781800224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-a666ba3d92,Uid:6a22e3d9e0e262d52d591c73d71ea4f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb\"" Jan 14 01:00:23.782823 containerd[1673]: time="2026-01-14T01:00:23.782796347Z" level=info msg="CreateContainer within sandbox \"c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8\"" Jan 14 01:00:23.783553 containerd[1673]: time="2026-01-14T01:00:23.783527989Z" level=info msg="StartContainer for 
\"c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8\"" Jan 14 01:00:23.785337 containerd[1673]: time="2026-01-14T01:00:23.785258354Z" level=info msg="connecting to shim c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8" address="unix:///run/containerd/s/7cd5e2bb2f6d4ed7c37d5d3507ae87598a9d5700dfbbff909807ff7d5ff7379a" protocol=ttrpc version=3 Jan 14 01:00:23.786753 containerd[1673]: time="2026-01-14T01:00:23.786730719Z" level=info msg="CreateContainer within sandbox \"cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 01:00:23.788456 containerd[1673]: time="2026-01-14T01:00:23.788421124Z" level=info msg="CreateContainer within sandbox \"33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 01:00:23.799314 containerd[1673]: time="2026-01-14T01:00:23.799266837Z" level=info msg="Container 1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:23.803508 containerd[1673]: time="2026-01-14T01:00:23.803460290Z" level=info msg="Container c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:23.803898 systemd[1]: Started cri-containerd-c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8.scope - libcontainer container c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8. 
Jan 14 01:00:23.806341 kubelet[2976]: E0114 01:00:23.806290 2976 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.30.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a666ba3d92?timeout=10s\": dial tcp 10.0.30.209:6443: connect: connection refused" interval="800ms" Jan 14 01:00:23.808890 containerd[1673]: time="2026-01-14T01:00:23.808751226Z" level=info msg="CreateContainer within sandbox \"33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8\"" Jan 14 01:00:23.810146 containerd[1673]: time="2026-01-14T01:00:23.809940710Z" level=info msg="StartContainer for \"1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8\"" Jan 14 01:00:23.811253 containerd[1673]: time="2026-01-14T01:00:23.811229914Z" level=info msg="connecting to shim 1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8" address="unix:///run/containerd/s/fce870e94a27f5d02c6431250f80d9d956783842f49cfe549f0bfe502bb5d32b" protocol=ttrpc version=3 Jan 14 01:00:23.817285 containerd[1673]: time="2026-01-14T01:00:23.817231132Z" level=info msg="CreateContainer within sandbox \"cd4379b2f5062c9b7dfb73c39acc3c08e46f306846809e248395342b32449a09\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9\"" Jan 14 01:00:23.817699 containerd[1673]: time="2026-01-14T01:00:23.817655693Z" level=info msg="StartContainer for \"c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9\"" Jan 14 01:00:23.818647 containerd[1673]: time="2026-01-14T01:00:23.818615216Z" level=info msg="connecting to shim c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9" address="unix:///run/containerd/s/11f2bbae94ac6e3d669345f2520fd86fb837030b24b0c45f1256a98f9b88a4f9" protocol=ttrpc version=3 
Jan 14 01:00:23.821000 audit: BPF prog-id=98 op=LOAD Jan 14 01:00:23.822000 audit: BPF prog-id=99 op=LOAD Jan 14 01:00:23.822000 audit[3153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.822000 audit: BPF prog-id=99 op=UNLOAD Jan 14 01:00:23.822000 audit[3153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.823000 audit: BPF prog-id=100 op=LOAD Jan 14 01:00:23.823000 audit[3153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.823000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.823000 audit: BPF prog-id=101 op=LOAD Jan 14 01:00:23.823000 audit[3153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.823000 audit: BPF prog-id=101 op=UNLOAD Jan 14 01:00:23.823000 audit[3153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.823000 audit: BPF prog-id=100 op=UNLOAD Jan 14 01:00:23.823000 audit[3153]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:00:23.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.823000 audit: BPF prog-id=102 op=LOAD Jan 14 01:00:23.823000 audit[3153]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3042 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334303761643334376462653436326537613165333338636665373937 Jan 14 01:00:23.832967 systemd[1]: Started cri-containerd-1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8.scope - libcontainer container 1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8. Jan 14 01:00:23.842877 systemd[1]: Started cri-containerd-c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9.scope - libcontainer container c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9. 
Jan 14 01:00:23.848000 audit: BPF prog-id=103 op=LOAD Jan 14 01:00:23.849000 audit: BPF prog-id=104 op=LOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.849000 audit: BPF prog-id=104 op=UNLOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.849000 audit: BPF prog-id=105 op=LOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.849000 audit: BPF prog-id=106 op=LOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.849000 audit: BPF prog-id=106 op=UNLOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.849000 audit: BPF prog-id=105 op=UNLOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:00:23.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.849000 audit: BPF prog-id=107 op=LOAD Jan 14 01:00:23.849000 audit[3176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3086 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137313131393031393061386562323137633535373135393833656165 Jan 14 01:00:23.856000 audit: BPF prog-id=108 op=LOAD Jan 14 01:00:23.856000 audit: BPF prog-id=109 op=LOAD Jan 14 01:00:23.856000 audit[3188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.856000 audit: BPF prog-id=109 op=UNLOAD Jan 14 01:00:23.856000 audit[3188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.856000 audit: BPF prog-id=110 op=LOAD Jan 14 01:00:23.856000 audit[3188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.856000 audit: BPF prog-id=111 op=LOAD Jan 14 01:00:23.856000 audit[3188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.857000 audit: BPF prog-id=111 op=UNLOAD Jan 14 01:00:23.857000 audit[3188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.857000 audit: BPF prog-id=110 op=UNLOAD Jan 14 01:00:23.857000 audit[3188]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.857000 audit: BPF prog-id=112 op=LOAD Jan 14 01:00:23.857000 audit[3188]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3022 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:23.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339613835623436376334653461303462336463336262346565316165 Jan 14 01:00:23.859994 containerd[1673]: time="2026-01-14T01:00:23.859924983Z" level=info msg="StartContainer for \"c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8\" returns 
successfully" Jan 14 01:00:23.887876 containerd[1673]: time="2026-01-14T01:00:23.887763828Z" level=info msg="StartContainer for \"1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8\" returns successfully" Jan 14 01:00:23.891967 containerd[1673]: time="2026-01-14T01:00:23.891624000Z" level=info msg="StartContainer for \"c9a85b467c4e4a04b3dc3bb4ee1aeea280d13df440a4369616fa5a3383da38f9\" returns successfully" Jan 14 01:00:23.971683 kubelet[2976]: I0114 01:00:23.971532 2976 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:23.972082 kubelet[2976]: E0114 01:00:23.971980 2976 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.30.209:6443/api/v1/nodes\": dial tcp 10.0.30.209:6443: connect: connection refused" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:24.231367 kubelet[2976]: E0114 01:00:24.231267 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:24.233512 kubelet[2976]: E0114 01:00:24.233482 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:24.236570 kubelet[2976]: E0114 01:00:24.236542 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:24.774172 kubelet[2976]: I0114 01:00:24.774133 2976 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.238382 kubelet[2976]: E0114 01:00:25.238346 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 
01:00:25.240350 kubelet[2976]: E0114 01:00:25.240325 2976 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.632659 kubelet[2976]: E0114 01:00:25.629994 2976 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-n-a666ba3d92\" not found" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.722627 kubelet[2976]: E0114 01:00:25.722538 2976 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547-0-0-n-a666ba3d92.188a7328e64ad914 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-a666ba3d92,UID:ci-4547-0-0-n-a666ba3d92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-a666ba3d92,},FirstTimestamp:2026-01-14 01:00:23.195891988 +0000 UTC m=+0.627537403,LastTimestamp:2026-01-14 01:00:23.195891988 +0000 UTC m=+0.627537403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-a666ba3d92,}" Jan 14 01:00:25.774243 kubelet[2976]: I0114 01:00:25.774190 2976 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.774243 kubelet[2976]: E0114 01:00:25.774236 2976 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4547-0-0-n-a666ba3d92\": node \"ci-4547-0-0-n-a666ba3d92\" not found" Jan 14 01:00:25.780945 kubelet[2976]: E0114 01:00:25.780820 2976 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4547-0-0-n-a666ba3d92.188a7328e6bd1e73 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-a666ba3d92,UID:ci-4547-0-0-n-a666ba3d92,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-a666ba3d92,},FirstTimestamp:2026-01-14 01:00:23.203380851 +0000 UTC m=+0.635026266,LastTimestamp:2026-01-14 01:00:23.203380851 +0000 UTC m=+0.635026266,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-a666ba3d92,}" Jan 14 01:00:25.804931 kubelet[2976]: I0114 01:00:25.804879 2976 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.821454 kubelet[2976]: E0114 01:00:25.821281 2976 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.821454 kubelet[2976]: I0114 01:00:25.821320 2976 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.823220 kubelet[2976]: E0114 01:00:25.823191 2976 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.823220 kubelet[2976]: I0114 01:00:25.823220 2976 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:25.826133 kubelet[2976]: E0114 01:00:25.826104 2976 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-a666ba3d92\" is forbidden: no PriorityClass with name system-node-critical 
was found" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:26.189295 kubelet[2976]: I0114 01:00:26.189253 2976 apiserver.go:52] "Watching apiserver" Jan 14 01:00:26.204939 kubelet[2976]: I0114 01:00:26.204848 2976 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:00:27.646606 systemd[1]: Reload requested from client PID 3269 ('systemctl') (unit session-8.scope)... Jan 14 01:00:27.646621 systemd[1]: Reloading... Jan 14 01:00:27.711712 zram_generator::config[3315]: No configuration found. Jan 14 01:00:27.898797 systemd[1]: Reloading finished in 251 ms. Jan 14 01:00:27.928543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 01:00:27.945824 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 01:00:27.946108 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:00:27.949180 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 14 01:00:27.949230 kernel: audit: type=1131 audit(1768352427.945:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:27.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:27.946180 systemd[1]: kubelet.service: Consumed 1.000s CPU time, 128.2M memory peak. Jan 14 01:00:27.949879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 01:00:27.949000 audit: BPF prog-id=113 op=LOAD Jan 14 01:00:27.949000 audit: BPF prog-id=77 op=UNLOAD Jan 14 01:00:27.952126 kernel: audit: type=1334 audit(1768352427.949:397): prog-id=113 op=LOAD Jan 14 01:00:27.952178 kernel: audit: type=1334 audit(1768352427.949:398): prog-id=77 op=UNLOAD Jan 14 01:00:27.950000 audit: BPF prog-id=114 op=LOAD Jan 14 01:00:27.953064 kernel: audit: type=1334 audit(1768352427.950:399): prog-id=114 op=LOAD Jan 14 01:00:27.950000 audit: BPF prog-id=115 op=LOAD Jan 14 01:00:27.953936 kernel: audit: type=1334 audit(1768352427.950:400): prog-id=115 op=LOAD Jan 14 01:00:27.954001 kernel: audit: type=1334 audit(1768352427.950:401): prog-id=78 op=UNLOAD Jan 14 01:00:27.950000 audit: BPF prog-id=78 op=UNLOAD Jan 14 01:00:27.950000 audit: BPF prog-id=79 op=UNLOAD Jan 14 01:00:27.955449 kernel: audit: type=1334 audit(1768352427.950:402): prog-id=79 op=UNLOAD Jan 14 01:00:27.955536 kernel: audit: type=1334 audit(1768352427.952:403): prog-id=116 op=LOAD Jan 14 01:00:27.952000 audit: BPF prog-id=116 op=LOAD Jan 14 01:00:27.967000 audit: BPF prog-id=69 op=UNLOAD Jan 14 01:00:27.968000 audit: BPF prog-id=117 op=LOAD Jan 14 01:00:27.968000 audit: BPF prog-id=71 op=UNLOAD Jan 14 01:00:27.968000 audit: BPF prog-id=118 op=LOAD Jan 14 01:00:27.969714 kernel: audit: type=1334 audit(1768352427.967:404): prog-id=69 op=UNLOAD Jan 14 01:00:27.969740 kernel: audit: type=1334 audit(1768352427.968:405): prog-id=117 op=LOAD Jan 14 01:00:27.969000 audit: BPF prog-id=119 op=LOAD Jan 14 01:00:27.969000 audit: BPF prog-id=72 op=UNLOAD Jan 14 01:00:27.969000 audit: BPF prog-id=73 op=UNLOAD Jan 14 01:00:27.970000 audit: BPF prog-id=120 op=LOAD Jan 14 01:00:27.970000 audit: BPF prog-id=74 op=UNLOAD Jan 14 01:00:27.970000 audit: BPF prog-id=121 op=LOAD Jan 14 01:00:27.970000 audit: BPF prog-id=122 op=LOAD Jan 14 01:00:27.970000 audit: BPF prog-id=75 op=UNLOAD Jan 14 01:00:27.970000 audit: BPF prog-id=76 op=UNLOAD Jan 14 01:00:27.971000 audit: BPF prog-id=123 
op=LOAD Jan 14 01:00:27.971000 audit: BPF prog-id=64 op=UNLOAD Jan 14 01:00:27.971000 audit: BPF prog-id=124 op=LOAD Jan 14 01:00:27.971000 audit: BPF prog-id=125 op=LOAD Jan 14 01:00:27.971000 audit: BPF prog-id=65 op=UNLOAD Jan 14 01:00:27.971000 audit: BPF prog-id=66 op=UNLOAD Jan 14 01:00:27.972000 audit: BPF prog-id=126 op=LOAD Jan 14 01:00:27.972000 audit: BPF prog-id=70 op=UNLOAD Jan 14 01:00:27.973000 audit: BPF prog-id=127 op=LOAD Jan 14 01:00:27.973000 audit: BPF prog-id=128 op=LOAD Jan 14 01:00:27.973000 audit: BPF prog-id=67 op=UNLOAD Jan 14 01:00:27.973000 audit: BPF prog-id=68 op=UNLOAD Jan 14 01:00:27.974000 audit: BPF prog-id=129 op=LOAD Jan 14 01:00:27.974000 audit: BPF prog-id=80 op=UNLOAD Jan 14 01:00:27.974000 audit: BPF prog-id=130 op=LOAD Jan 14 01:00:27.974000 audit: BPF prog-id=131 op=LOAD Jan 14 01:00:27.974000 audit: BPF prog-id=81 op=UNLOAD Jan 14 01:00:27.974000 audit: BPF prog-id=82 op=UNLOAD Jan 14 01:00:27.974000 audit: BPF prog-id=132 op=LOAD Jan 14 01:00:27.974000 audit: BPF prog-id=63 op=UNLOAD Jan 14 01:00:28.113647 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 01:00:28.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:28.126007 (kubelet)[3360]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 01:00:28.157232 kubelet[3360]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:00:28.157232 kubelet[3360]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 01:00:28.157232 kubelet[3360]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 01:00:28.157232 kubelet[3360]: I0114 01:00:28.157204 3360 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 01:00:28.163748 kubelet[3360]: I0114 01:00:28.163714 3360 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 14 01:00:28.163748 kubelet[3360]: I0114 01:00:28.163742 3360 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 01:00:28.163945 kubelet[3360]: I0114 01:00:28.163929 3360 server.go:956] "Client rotation is on, will bootstrap in background" Jan 14 01:00:28.165187 kubelet[3360]: I0114 01:00:28.165167 3360 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 14 01:00:28.167527 kubelet[3360]: I0114 01:00:28.167503 3360 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 01:00:28.170473 kubelet[3360]: I0114 01:00:28.170437 3360 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 01:00:28.172980 kubelet[3360]: I0114 01:00:28.172948 3360 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 01:00:28.173179 kubelet[3360]: I0114 01:00:28.173136 3360 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 01:00:28.173313 kubelet[3360]: I0114 01:00:28.173160 3360 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-a666ba3d92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 01:00:28.173313 kubelet[3360]: I0114 01:00:28.173308 3360 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 
01:00:28.173313 kubelet[3360]: I0114 01:00:28.173316 3360 container_manager_linux.go:303] "Creating device plugin manager" Jan 14 01:00:28.173432 kubelet[3360]: I0114 01:00:28.173358 3360 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:00:28.173521 kubelet[3360]: I0114 01:00:28.173509 3360 kubelet.go:480] "Attempting to sync node with API server" Jan 14 01:00:28.173550 kubelet[3360]: I0114 01:00:28.173523 3360 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 01:00:28.173550 kubelet[3360]: I0114 01:00:28.173550 3360 kubelet.go:386] "Adding apiserver pod source" Jan 14 01:00:28.173605 kubelet[3360]: I0114 01:00:28.173566 3360 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 01:00:28.174423 kubelet[3360]: I0114 01:00:28.174208 3360 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 01:00:28.181535 kubelet[3360]: I0114 01:00:28.176944 3360 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 14 01:00:28.181535 kubelet[3360]: I0114 01:00:28.180277 3360 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 01:00:28.181535 kubelet[3360]: I0114 01:00:28.180338 3360 server.go:1289] "Started kubelet" Jan 14 01:00:28.181535 kubelet[3360]: I0114 01:00:28.180675 3360 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 01:00:28.181535 kubelet[3360]: I0114 01:00:28.180901 3360 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 01:00:28.181535 kubelet[3360]: I0114 01:00:28.181139 3360 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 01:00:28.184403 kubelet[3360]: I0114 01:00:28.183596 3360 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 
01:00:28.184403 kubelet[3360]: I0114 01:00:28.183757 3360 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 01:00:28.184786 kubelet[3360]: I0114 01:00:28.184764 3360 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 01:00:28.185903 kubelet[3360]: I0114 01:00:28.185884 3360 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 01:00:28.186015 kubelet[3360]: E0114 01:00:28.185988 3360 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-a666ba3d92\" not found" Jan 14 01:00:28.186209 kubelet[3360]: I0114 01:00:28.186195 3360 reconciler.go:26] "Reconciler: start to sync state" Jan 14 01:00:28.186476 kubelet[3360]: I0114 01:00:28.186457 3360 server.go:317] "Adding debug handlers to kubelet server" Jan 14 01:00:28.199926 kubelet[3360]: I0114 01:00:28.199887 3360 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 01:00:28.201487 kubelet[3360]: I0114 01:00:28.201459 3360 factory.go:223] Registration of the containerd container factory successfully Jan 14 01:00:28.201487 kubelet[3360]: I0114 01:00:28.201479 3360 factory.go:223] Registration of the systemd container factory successfully Jan 14 01:00:28.203569 kubelet[3360]: E0114 01:00:28.202088 3360 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 01:00:28.215532 kubelet[3360]: I0114 01:00:28.215491 3360 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 14 01:00:28.216802 kubelet[3360]: I0114 01:00:28.216769 3360 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 14 01:00:28.216802 kubelet[3360]: I0114 01:00:28.216797 3360 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 14 01:00:28.216894 kubelet[3360]: I0114 01:00:28.216826 3360 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 01:00:28.216894 kubelet[3360]: I0114 01:00:28.216834 3360 kubelet.go:2436] "Starting kubelet main sync loop" Jan 14 01:00:28.216894 kubelet[3360]: E0114 01:00:28.216873 3360 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 01:00:28.240103 kubelet[3360]: I0114 01:00:28.240074 3360 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 01:00:28.240103 kubelet[3360]: I0114 01:00:28.240098 3360 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 01:00:28.240244 kubelet[3360]: I0114 01:00:28.240121 3360 state_mem.go:36] "Initialized new in-memory state store" Jan 14 01:00:28.240244 kubelet[3360]: I0114 01:00:28.240239 3360 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 01:00:28.240282 kubelet[3360]: I0114 01:00:28.240249 3360 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 01:00:28.240282 kubelet[3360]: I0114 01:00:28.240265 3360 policy_none.go:49] "None policy: Start" Jan 14 01:00:28.240282 kubelet[3360]: I0114 01:00:28.240274 3360 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 01:00:28.240282 kubelet[3360]: I0114 01:00:28.240282 3360 state_mem.go:35] "Initializing new in-memory state store" Jan 14 01:00:28.240656 kubelet[3360]: I0114 01:00:28.240440 3360 state_mem.go:75] "Updated machine memory state" Jan 14 01:00:28.244953 kubelet[3360]: E0114 01:00:28.244911 3360 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 14 01:00:28.246597 kubelet[3360]: I0114 
01:00:28.246581 3360 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 01:00:28.246834 kubelet[3360]: I0114 01:00:28.246794 3360 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 01:00:28.247876 kubelet[3360]: I0114 01:00:28.247845 3360 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 01:00:28.248943 kubelet[3360]: E0114 01:00:28.248908 3360 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 01:00:28.318193 kubelet[3360]: I0114 01:00:28.318156 3360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.318496 kubelet[3360]: I0114 01:00:28.318405 3360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.318496 kubelet[3360]: I0114 01:00:28.318248 3360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.351250 kubelet[3360]: I0114 01:00:28.351203 3360 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.358968 kubelet[3360]: I0114 01:00:28.358937 3360 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.359071 kubelet[3360]: I0114 01:00:28.359018 3360 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.388287 kubelet[3360]: I0114 01:00:28.388162 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6a22e3d9e0e262d52d591c73d71ea4f7-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-a666ba3d92\" (UID: \"6a22e3d9e0e262d52d591c73d71ea4f7\") " 
pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.388911 kubelet[3360]: I0114 01:00:28.388251 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/af1c47d95b80b484814f121d7cb4ba09-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" (UID: \"af1c47d95b80b484814f121d7cb4ba09\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.388911 kubelet[3360]: I0114 01:00:28.388591 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.388911 kubelet[3360]: I0114 01:00:28.388744 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.388911 kubelet[3360]: I0114 01:00:28.388805 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.388911 kubelet[3360]: I0114 01:00:28.388822 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/af1c47d95b80b484814f121d7cb4ba09-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" (UID: \"af1c47d95b80b484814f121d7cb4ba09\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.389050 kubelet[3360]: I0114 01:00:28.388837 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/af1c47d95b80b484814f121d7cb4ba09-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-a666ba3d92\" (UID: \"af1c47d95b80b484814f121d7cb4ba09\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.389050 kubelet[3360]: I0114 01:00:28.388853 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:28.389050 kubelet[3360]: I0114 01:00:28.388873 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c06683222dcf481251d781d6b970697e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" (UID: \"c06683222dcf481251d781d6b970697e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:29.174193 kubelet[3360]: I0114 01:00:29.174098 3360 apiserver.go:52] "Watching apiserver" Jan 14 01:00:29.186444 kubelet[3360]: I0114 01:00:29.186395 3360 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 01:00:29.228526 kubelet[3360]: I0114 01:00:29.228495 3360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" 
Jan 14 01:00:29.229332 kubelet[3360]: I0114 01:00:29.229312 3360 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:29.237554 kubelet[3360]: E0114 01:00:29.235997 3360 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-n-a666ba3d92\" already exists" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:29.237554 kubelet[3360]: E0114 01:00:29.236445 3360 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-a666ba3d92\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" Jan 14 01:00:29.260798 kubelet[3360]: I0114 01:00:29.260720 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-n-a666ba3d92" podStartSLOduration=1.260701571 podStartE2EDuration="1.260701571s" podCreationTimestamp="2026-01-14 01:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:00:29.259894808 +0000 UTC m=+1.130024663" watchObservedRunningTime="2026-01-14 01:00:29.260701571 +0000 UTC m=+1.130831426" Jan 14 01:00:29.260936 kubelet[3360]: I0114 01:00:29.260894 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-n-a666ba3d92" podStartSLOduration=1.260888811 podStartE2EDuration="1.260888811s" podCreationTimestamp="2026-01-14 01:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:00:29.250054818 +0000 UTC m=+1.120184673" watchObservedRunningTime="2026-01-14 01:00:29.260888811 +0000 UTC m=+1.131018666" Jan 14 01:00:29.283831 kubelet[3360]: I0114 01:00:29.283762 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4547-0-0-n-a666ba3d92" podStartSLOduration=1.283745561 podStartE2EDuration="1.283745561s" podCreationTimestamp="2026-01-14 01:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:00:29.271029602 +0000 UTC m=+1.141159457" watchObservedRunningTime="2026-01-14 01:00:29.283745561 +0000 UTC m=+1.153875376" Jan 14 01:00:35.049261 kubelet[3360]: I0114 01:00:35.049201 3360 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 01:00:35.050492 containerd[1673]: time="2026-01-14T01:00:35.049830149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 01:00:35.050758 kubelet[3360]: I0114 01:00:35.049998 3360 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 01:00:36.042337 systemd[1]: Created slice kubepods-besteffort-pode8d01b1e_1ce6_4483_b61b_848924b824e9.slice - libcontainer container kubepods-besteffort-pode8d01b1e_1ce6_4483_b61b_848924b824e9.slice. 
Jan 14 01:00:36.140968 kubelet[3360]: I0114 01:00:36.140799 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8d01b1e-1ce6-4483-b61b-848924b824e9-kube-proxy\") pod \"kube-proxy-fqwsh\" (UID: \"e8d01b1e-1ce6-4483-b61b-848924b824e9\") " pod="kube-system/kube-proxy-fqwsh" Jan 14 01:00:36.140968 kubelet[3360]: I0114 01:00:36.140851 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8d01b1e-1ce6-4483-b61b-848924b824e9-xtables-lock\") pod \"kube-proxy-fqwsh\" (UID: \"e8d01b1e-1ce6-4483-b61b-848924b824e9\") " pod="kube-system/kube-proxy-fqwsh" Jan 14 01:00:36.140968 kubelet[3360]: I0114 01:00:36.140918 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8d01b1e-1ce6-4483-b61b-848924b824e9-lib-modules\") pod \"kube-proxy-fqwsh\" (UID: \"e8d01b1e-1ce6-4483-b61b-848924b824e9\") " pod="kube-system/kube-proxy-fqwsh" Jan 14 01:00:36.140968 kubelet[3360]: I0114 01:00:36.140940 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjfcm\" (UniqueName: \"kubernetes.io/projected/e8d01b1e-1ce6-4483-b61b-848924b824e9-kube-api-access-jjfcm\") pod \"kube-proxy-fqwsh\" (UID: \"e8d01b1e-1ce6-4483-b61b-848924b824e9\") " pod="kube-system/kube-proxy-fqwsh" Jan 14 01:00:36.160261 systemd[1]: Created slice kubepods-besteffort-podfcbd49d1_6cc7_4a4f_a4c7_314217c673eb.slice - libcontainer container kubepods-besteffort-podfcbd49d1_6cc7_4a4f_a4c7_314217c673eb.slice. 
Jan 14 01:00:36.241313 kubelet[3360]: I0114 01:00:36.241258 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fcbd49d1-6cc7-4a4f-a4c7-314217c673eb-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hvztc\" (UID: \"fcbd49d1-6cc7-4a4f-a4c7-314217c673eb\") " pod="tigera-operator/tigera-operator-7dcd859c48-hvztc" Jan 14 01:00:36.241313 kubelet[3360]: I0114 01:00:36.241304 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwspf\" (UniqueName: \"kubernetes.io/projected/fcbd49d1-6cc7-4a4f-a4c7-314217c673eb-kube-api-access-lwspf\") pod \"tigera-operator-7dcd859c48-hvztc\" (UID: \"fcbd49d1-6cc7-4a4f-a4c7-314217c673eb\") " pod="tigera-operator/tigera-operator-7dcd859c48-hvztc" Jan 14 01:00:36.358045 containerd[1673]: time="2026-01-14T01:00:36.357949877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fqwsh,Uid:e8d01b1e-1ce6-4483-b61b-848924b824e9,Namespace:kube-system,Attempt:0,}" Jan 14 01:00:36.389740 containerd[1673]: time="2026-01-14T01:00:36.389022092Z" level=info msg="connecting to shim 24589eda9ee45a5222f011db01ac14931f4d91f77f0effd4f228839937fafcf8" address="unix:///run/containerd/s/0e7e14c98fe3e693ee305545cf09027a3d58071afc7019fb8d00feaacb23b7aa" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:00:36.409041 systemd[1]: Started cri-containerd-24589eda9ee45a5222f011db01ac14931f4d91f77f0effd4f228839937fafcf8.scope - libcontainer container 24589eda9ee45a5222f011db01ac14931f4d91f77f0effd4f228839937fafcf8. 
Jan 14 01:00:36.417000 audit: BPF prog-id=133 op=LOAD Jan 14 01:00:36.419032 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 01:00:36.419074 kernel: audit: type=1334 audit(1768352436.417:438): prog-id=133 op=LOAD Jan 14 01:00:36.419000 audit: BPF prog-id=134 op=LOAD Jan 14 01:00:36.421023 kernel: audit: type=1334 audit(1768352436.419:439): prog-id=134 op=LOAD Jan 14 01:00:36.421062 kernel: audit: type=1300 audit(1768352436.419:439): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.419000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.428155 kernel: audit: type=1327 audit(1768352436.419:439): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.428284 kernel: audit: type=1334 audit(1768352436.419:440): prog-id=134 op=UNLOAD Jan 14 01:00:36.419000 audit: BPF prog-id=134 op=UNLOAD Jan 14 01:00:36.429003 kernel: audit: type=1300 audit(1768352436.419:440): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 
ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.419000 audit[3435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.435908 kernel: audit: type=1327 audit(1768352436.419:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.436026 kernel: audit: type=1334 audit(1768352436.419:441): prog-id=135 op=LOAD Jan 14 01:00:36.419000 audit: BPF prog-id=135 op=LOAD Jan 14 01:00:36.419000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.440352 kernel: audit: type=1300 audit(1768352436.419:441): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:00:36.440458 kernel: audit: type=1327 audit(1768352436.419:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.419000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.420000 audit: BPF prog-id=136 op=LOAD Jan 14 01:00:36.420000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.420000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.423000 audit: BPF prog-id=136 op=UNLOAD Jan 14 01:00:36.423000 audit[3435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.423000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.424000 audit: BPF prog-id=135 op=UNLOAD Jan 14 01:00:36.424000 audit[3435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.424000 audit: BPF prog-id=137 op=LOAD Jan 14 01:00:36.424000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3423 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.424000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234353839656461396565343561353232326630313164623031616331 Jan 14 01:00:36.453986 containerd[1673]: time="2026-01-14T01:00:36.453943451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fqwsh,Uid:e8d01b1e-1ce6-4483-b61b-848924b824e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"24589eda9ee45a5222f011db01ac14931f4d91f77f0effd4f228839937fafcf8\"" Jan 14 01:00:36.459061 containerd[1673]: 
time="2026-01-14T01:00:36.458996546Z" level=info msg="CreateContainer within sandbox \"24589eda9ee45a5222f011db01ac14931f4d91f77f0effd4f228839937fafcf8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 01:00:36.464539 containerd[1673]: time="2026-01-14T01:00:36.464498403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hvztc,Uid:fcbd49d1-6cc7-4a4f-a4c7-314217c673eb,Namespace:tigera-operator,Attempt:0,}" Jan 14 01:00:36.473464 containerd[1673]: time="2026-01-14T01:00:36.473429270Z" level=info msg="Container d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:36.485589 containerd[1673]: time="2026-01-14T01:00:36.485527027Z" level=info msg="CreateContainer within sandbox \"24589eda9ee45a5222f011db01ac14931f4d91f77f0effd4f228839937fafcf8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78\"" Jan 14 01:00:36.486234 containerd[1673]: time="2026-01-14T01:00:36.486192790Z" level=info msg="StartContainer for \"d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78\"" Jan 14 01:00:36.487564 containerd[1673]: time="2026-01-14T01:00:36.487534114Z" level=info msg="connecting to shim d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78" address="unix:///run/containerd/s/0e7e14c98fe3e693ee305545cf09027a3d58071afc7019fb8d00feaacb23b7aa" protocol=ttrpc version=3 Jan 14 01:00:36.505391 containerd[1673]: time="2026-01-14T01:00:36.505328888Z" level=info msg="connecting to shim 20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b" address="unix:///run/containerd/s/944a0878fea4befa0aa9b19fcf3557c03bc26290e54253e50e84257ece81b340" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:00:36.506936 systemd[1]: Started cri-containerd-d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78.scope - libcontainer container 
d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78. Jan 14 01:00:36.528947 systemd[1]: Started cri-containerd-20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b.scope - libcontainer container 20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b. Jan 14 01:00:36.541000 audit: BPF prog-id=138 op=LOAD Jan 14 01:00:36.542000 audit: BPF prog-id=139 op=LOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.542000 audit: BPF prog-id=139 op=UNLOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.542000 audit: BPF prog-id=140 op=LOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.542000 audit: BPF prog-id=141 op=LOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.542000 audit: BPF prog-id=141 op=UNLOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.542000 audit: BPF prog-id=140 op=UNLOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.542000 audit: BPF prog-id=142 op=LOAD Jan 14 01:00:36.542000 audit[3496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3482 pid=3496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.542000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230313431643266613565356137306661613032313862383764383263 Jan 14 01:00:36.547000 audit: BPF prog-id=143 op=LOAD Jan 14 01:00:36.547000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3423 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433306262333032333534646365633836643035383165376132633063 Jan 14 01:00:36.547000 audit: BPF prog-id=144 op=LOAD Jan 14 01:00:36.547000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3423 pid=3462 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433306262333032333534646365633836643035383165376132633063 Jan 14 01:00:36.547000 audit: BPF prog-id=144 op=UNLOAD Jan 14 01:00:36.547000 audit[3462]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433306262333032333534646365633836643035383165376132633063 Jan 14 01:00:36.547000 audit: BPF prog-id=143 op=UNLOAD Jan 14 01:00:36.547000 audit[3462]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3423 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433306262333032333534646365633836643035383165376132633063 Jan 14 01:00:36.547000 audit: BPF prog-id=145 op=LOAD Jan 14 01:00:36.547000 audit[3462]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 
a2=98 a3=0 items=0 ppid=3423 pid=3462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.547000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6433306262333032333534646365633836643035383165376132633063 Jan 14 01:00:36.572169 containerd[1673]: time="2026-01-14T01:00:36.572029053Z" level=info msg="StartContainer for \"d30bb302354dcec86d0581e7a2c0cb789e481b441ef513094650878c3d6edd78\" returns successfully" Jan 14 01:00:36.572715 containerd[1673]: time="2026-01-14T01:00:36.572355374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hvztc,Uid:fcbd49d1-6cc7-4a4f-a4c7-314217c673eb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b\"" Jan 14 01:00:36.575320 containerd[1673]: time="2026-01-14T01:00:36.575279902Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 01:00:36.712000 audit[3576]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.712000 audit[3576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec0606f0 a2=0 a3=1 items=0 ppid=3494 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.712000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:00:36.714000 audit[3577]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3577 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.714000 audit[3577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe71daea0 a2=0 a3=1 items=0 ppid=3494 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 01:00:36.715000 audit[3579]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.715000 audit[3579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda7d7eb0 a2=0 a3=1 items=0 ppid=3494 pid=3579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.715000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:00:36.716000 audit[3582]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.716000 audit[3582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc464dde0 a2=0 a3=1 items=0 ppid=3494 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 01:00:36.717000 audit[3583]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3583 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.717000 audit[3583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff7541330 a2=0 a3=1 items=0 ppid=3494 pid=3583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.717000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:00:36.719000 audit[3584]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.719000 audit[3584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff4874ae0 a2=0 a3=1 items=0 ppid=3494 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 01:00:36.816000 audit[3585]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.816000 audit[3585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd66ec180 a2=0 a3=1 items=0 ppid=3494 pid=3585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.816000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:00:36.818000 audit[3587]: NETFILTER_CFG table=filter:61 family=2 entries=1 
op=nft_register_rule pid=3587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.818000 audit[3587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff8ea24e0 a2=0 a3=1 items=0 ppid=3494 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.818000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 01:00:36.822000 audit[3590]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.822000 audit[3590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe4dd50f0 a2=0 a3=1 items=0 ppid=3494 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 01:00:36.823000 audit[3591]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.823000 audit[3591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1647870 a2=0 a3=1 items=0 ppid=3494 pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.823000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:00:36.825000 audit[3593]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.825000 audit[3593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcd017630 a2=0 a3=1 items=0 ppid=3494 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:00:36.826000 audit[3594]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.826000 audit[3594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4a093d0 a2=0 a3=1 items=0 ppid=3494 pid=3594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:00:36.829000 audit[3596]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3596 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.829000 audit[3596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc73d0530 a2=0 a3=1 items=0 ppid=3494 
pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.829000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:00:36.832000 audit[3599]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.832000 audit[3599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd480ff10 a2=0 a3=1 items=0 ppid=3494 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.832000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 01:00:36.833000 audit[3600]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.833000 audit[3600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7151bf0 a2=0 a3=1 items=0 ppid=3494 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 
01:00:36.835000 audit[3602]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.835000 audit[3602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6f206b0 a2=0 a3=1 items=0 ppid=3494 pid=3602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.835000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:00:36.836000 audit[3603]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.836000 audit[3603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc77de6b0 a2=0 a3=1 items=0 ppid=3494 pid=3603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:00:36.839000 audit[3605]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.839000 audit[3605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe1dc90a0 a2=0 a3=1 items=0 ppid=3494 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.839000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:00:36.842000 audit[3608]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3608 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.842000 audit[3608]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdb99f210 a2=0 a3=1 items=0 ppid=3494 pid=3608 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:00:36.846000 audit[3611]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.846000 audit[3611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc794c7d0 a2=0 a3=1 items=0 ppid=3494 pid=3611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:00:36.847000 audit[3612]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.847000 audit[3612]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe344780 a2=0 a3=1 items=0 ppid=3494 pid=3612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.847000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:00:36.850000 audit[3614]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.850000 audit[3614]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff73fab40 a2=0 a3=1 items=0 ppid=3494 pid=3614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:00:36.853000 audit[3617]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.853000 audit[3617]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffef530d70 a2=0 a3=1 items=0 ppid=3494 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.853000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:00:36.854000 audit[3618]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.854000 audit[3618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1942180 a2=0 a3=1 items=0 ppid=3494 pid=3618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:00:36.856000 audit[3620]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 01:00:36.856000 audit[3620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffd73d6a30 a2=0 a3=1 items=0 ppid=3494 pid=3620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.856000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:00:36.877000 audit[3626]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:36.877000 audit[3626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd01a2200 a2=0 a3=1 items=0 ppid=3494 pid=3626 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.877000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:36.893000 audit[3626]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3626 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:36.893000 audit[3626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd01a2200 a2=0 a3=1 items=0 ppid=3494 pid=3626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:36.894000 audit[3631]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3631 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.894000 audit[3631]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffebfb4630 a2=0 a3=1 items=0 ppid=3494 pid=3631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 01:00:36.897000 audit[3633]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3633 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.897000 audit[3633]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe779c9d0 
a2=0 a3=1 items=0 ppid=3494 pid=3633 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 01:00:36.901000 audit[3636]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.901000 audit[3636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd3fc6af0 a2=0 a3=1 items=0 ppid=3494 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 01:00:36.902000 audit[3637]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3637 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.902000 audit[3637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7f27570 a2=0 a3=1 items=0 ppid=3494 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.902000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 01:00:36.905000 audit[3639]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.905000 audit[3639]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd5678eb0 a2=0 a3=1 items=0 ppid=3494 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 01:00:36.906000 audit[3640]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3640 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.906000 audit[3640]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5075120 a2=0 a3=1 items=0 ppid=3494 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.906000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 01:00:36.908000 audit[3642]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3642 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.908000 audit[3642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe9c2ccb0 a2=0 a3=1 items=0 ppid=3494 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.908000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 01:00:36.912000 audit[3645]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.912000 audit[3645]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffda87baa0 a2=0 a3=1 items=0 ppid=3494 pid=3645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.912000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 01:00:36.913000 audit[3646]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3646 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.913000 audit[3646]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4a90bd0 a2=0 a3=1 items=0 ppid=3494 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 01:00:36.915000 audit[3648]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule 
pid=3648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.915000 audit[3648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcf828da0 a2=0 a3=1 items=0 ppid=3494 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 01:00:36.916000 audit[3649]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3649 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.916000 audit[3649]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe637d3c0 a2=0 a3=1 items=0 ppid=3494 pid=3649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.916000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 01:00:36.918000 audit[3651]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3651 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.918000 audit[3651]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdd1ca180 a2=0 a3=1 items=0 ppid=3494 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.918000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 01:00:36.922000 audit[3654]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3654 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.922000 audit[3654]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc25a7dc0 a2=0 a3=1 items=0 ppid=3494 pid=3654 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.922000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 01:00:36.925000 audit[3657]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.925000 audit[3657]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2392d00 a2=0 a3=1 items=0 ppid=3494 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.925000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 01:00:36.926000 audit[3658]: NETFILTER_CFG table=nat:95 family=10 entries=1 
op=nft_register_chain pid=3658 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.926000 audit[3658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc8904350 a2=0 a3=1 items=0 ppid=3494 pid=3658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.926000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 01:00:36.929000 audit[3660]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3660 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.929000 audit[3660]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcb31b840 a2=0 a3=1 items=0 ppid=3494 pid=3660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.929000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:00:36.932000 audit[3663]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3663 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.932000 audit[3663]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc0722540 a2=0 a3=1 items=0 ppid=3494 pid=3663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.932000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 01:00:36.933000 audit[3664]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3664 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.933000 audit[3664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef48b300 a2=0 a3=1 items=0 ppid=3494 pid=3664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.933000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 01:00:36.935000 audit[3666]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.935000 audit[3666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc8748700 a2=0 a3=1 items=0 ppid=3494 pid=3666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.935000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 01:00:36.936000 audit[3667]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3667 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.936000 audit[3667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd85f13c0 a2=0 a3=1 items=0 ppid=3494 
pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.936000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 01:00:36.938000 audit[3669]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3669 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.938000 audit[3669]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe9a45550 a2=0 a3=1 items=0 ppid=3494 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:00:36.941000 audit[3672]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3672 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 01:00:36.941000 audit[3672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffda90a650 a2=0 a3=1 items=0 ppid=3494 pid=3672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 01:00:36.944000 audit[3674]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:00:36.944000 audit[3674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffdd102ce0 a2=0 a3=1 items=0 ppid=3494 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.944000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:36.945000 audit[3674]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3674 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 01:00:36.945000 audit[3674]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffdd102ce0 a2=0 a3=1 items=0 ppid=3494 pid=3674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:36.945000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:37.258730 kubelet[3360]: I0114 01:00:37.258281 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fqwsh" podStartSLOduration=1.258263995 podStartE2EDuration="1.258263995s" podCreationTimestamp="2026-01-14 01:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:00:37.257959914 +0000 UTC m=+9.128089769" watchObservedRunningTime="2026-01-14 01:00:37.258263995 +0000 UTC m=+9.128393850" Jan 14 01:00:38.485508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655237681.mount: Deactivated successfully. 
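The `proctitle=` fields in the audit records above are the invoking process's command line, hex-encoded with NUL bytes separating the arguments (the standard Linux audit PROCTITLE encoding, the same thing `ausearch -i` renders). A minimal sketch of a decoder, with one record from this log decoded as an example (`decode_proctitle` is an illustrative helper name, not part of any tool in the log):

```python
def decode_proctitle(hex_value: str) -> str:
    """Decode an audit PROCTITLE hex string into a readable command line.

    Illustrative helper (not from the log above): audit encodes argv as
    raw bytes with NUL separators, then hex-encodes the whole buffer.
    """
    raw = bytes.fromhex(hex_value)
    # Split on the NUL separators; drop any empty trailing element.
    args = [a.decode("utf-8", errors="replace") for a in raw.split(b"\x00") if a]
    return " ".join(args)

# One of the iptables PROCTITLE records above, decoded:
cmd = decode_proctitle(
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4E4F4445504F525453002D740066696C746572"
)
print(cmd)  # iptables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter
```

Decoded this way, the records read as kube-proxy creating its KUBE-* chains and jump rules in the filter and nat tables for both IPv4 (`family=2`) and IPv6 (`family=10`).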
Jan 14 01:00:38.767985 containerd[1673]: time="2026-01-14T01:00:38.767929221Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:38.769602 containerd[1673]: time="2026-01-14T01:00:38.769372305Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 14 01:00:38.770849 containerd[1673]: time="2026-01-14T01:00:38.770806509Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:38.773753 containerd[1673]: time="2026-01-14T01:00:38.773417037Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:38.774209 containerd[1673]: time="2026-01-14T01:00:38.774181920Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.198867937s" Jan 14 01:00:38.774294 containerd[1673]: time="2026-01-14T01:00:38.774271560Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 01:00:38.779190 containerd[1673]: time="2026-01-14T01:00:38.779153535Z" level=info msg="CreateContainer within sandbox \"20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 01:00:38.787358 containerd[1673]: time="2026-01-14T01:00:38.787321640Z" level=info msg="Container 
4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:38.794197 containerd[1673]: time="2026-01-14T01:00:38.794114581Z" level=info msg="CreateContainer within sandbox \"20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27\"" Jan 14 01:00:38.795024 containerd[1673]: time="2026-01-14T01:00:38.794630742Z" level=info msg="StartContainer for \"4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27\"" Jan 14 01:00:38.796894 containerd[1673]: time="2026-01-14T01:00:38.796562508Z" level=info msg="connecting to shim 4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27" address="unix:///run/containerd/s/944a0878fea4befa0aa9b19fcf3557c03bc26290e54253e50e84257ece81b340" protocol=ttrpc version=3 Jan 14 01:00:38.813875 systemd[1]: Started cri-containerd-4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27.scope - libcontainer container 4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27. 
Jan 14 01:00:38.823000 audit: BPF prog-id=146 op=LOAD Jan 14 01:00:38.823000 audit: BPF prog-id=147 op=LOAD Jan 14 01:00:38.823000 audit[3683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:38.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.823000 audit: BPF prog-id=147 op=UNLOAD Jan 14 01:00:38.823000 audit[3683]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:38.823000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.824000 audit: BPF prog-id=148 op=LOAD Jan 14 01:00:38.824000 audit[3683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:38.824000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.824000 audit: BPF prog-id=149 op=LOAD Jan 14 01:00:38.824000 audit[3683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.824000 audit: BPF prog-id=149 op=UNLOAD Jan 14 01:00:38.824000 audit[3683]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.824000 audit: BPF prog-id=148 op=UNLOAD Jan 14 01:00:38.824000 audit[3683]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:00:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.824000 audit: BPF prog-id=150 op=LOAD Jan 14 01:00:38.824000 audit[3683]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3482 pid=3683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3463386137663464373639396637613831373462663931326463326563 Jan 14 01:00:38.841188 containerd[1673]: time="2026-01-14T01:00:38.841153765Z" level=info msg="StartContainer for \"4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27\" returns successfully" Jan 14 01:00:39.604997 kubelet[3360]: I0114 01:00:39.604907 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hvztc" podStartSLOduration=1.404724923 podStartE2EDuration="3.604889745s" podCreationTimestamp="2026-01-14 01:00:36 +0000 UTC" firstStartedPulling="2026-01-14 01:00:36.574926381 +0000 UTC m=+8.445056236" lastFinishedPulling="2026-01-14 01:00:38.775091203 +0000 UTC m=+10.645221058" observedRunningTime="2026-01-14 01:00:39.261009411 +0000 UTC m=+11.131139266" watchObservedRunningTime="2026-01-14 01:00:39.604889745 +0000 UTC m=+11.475019600" Jan 14 01:00:44.081000 audit[2410]: USER_END pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:44.081810 sudo[2410]: pam_unix(sudo:session): session closed for user root Jan 14 01:00:44.087727 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 01:00:44.087822 kernel: audit: type=1106 audit(1768352444.081:518): pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:44.087855 kernel: audit: type=1104 audit(1768352444.082:519): pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:44.082000 audit[2410]: CRED_DISP pid=2410 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 01:00:44.179791 sshd[2409]: Connection closed by 20.161.92.111 port 56818 Jan 14 01:00:44.181075 sshd-session[2405]: pam_unix(sshd:session): session closed for user core Jan 14 01:00:44.184000 audit[2405]: USER_END pid=2405 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:44.190924 systemd[1]: sshd@6-10.0.30.209:22-20.161.92.111:56818.service: Deactivated successfully. 
Jan 14 01:00:44.184000 audit[2405]: CRED_DISP pid=2405 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:44.193089 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 01:00:44.193534 systemd[1]: session-8.scope: Consumed 7.026s CPU time, 220.6M memory peak. Jan 14 01:00:44.194653 kernel: audit: type=1106 audit(1768352444.184:520): pid=2405 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:44.194745 kernel: audit: type=1104 audit(1768352444.184:521): pid=2405 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:00:44.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.30.209:22-20.161.92.111:56818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:44.197794 kernel: audit: type=1131 audit(1768352444.190:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.30.209:22-20.161.92.111:56818 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:00:44.199042 systemd-logind[1648]: Session 8 logged out. Waiting for processes to exit. Jan 14 01:00:44.201238 systemd-logind[1648]: Removed session 8. 
Jan 14 01:00:45.547000 audit[3773]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:45.547000 audit[3773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5b4cd10 a2=0 a3=1 items=0 ppid=3494 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:45.553958 kernel: audit: type=1325 audit(1768352445.547:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:45.554158 kernel: audit: type=1300 audit(1768352445.547:523): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5b4cd10 a2=0 a3=1 items=0 ppid=3494 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:45.547000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:45.556796 kernel: audit: type=1327 audit(1768352445.547:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:45.558000 audit[3773]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:45.558000 audit[3773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5b4cd10 a2=0 a3=1 items=0 ppid=3494 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:45.564400 
kernel: audit: type=1325 audit(1768352445.558:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3773 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:45.564484 kernel: audit: type=1300 audit(1768352445.558:524): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5b4cd10 a2=0 a3=1 items=0 ppid=3494 pid=3773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:45.558000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:45.573000 audit[3775]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:45.573000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffa8831d0 a2=0 a3=1 items=0 ppid=3494 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:45.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:45.581000 audit[3775]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3775 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:45.581000 audit[3775]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffa8831d0 a2=0 a3=1 items=0 ppid=3494 pid=3775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:45.581000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:48.887000 audit[3778]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:48.887000 audit[3778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffeb6ebc20 a2=0 a3=1 items=0 ppid=3494 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:48.887000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:48.894000 audit[3778]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:48.894000 audit[3778]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb6ebc20 a2=0 a3=1 items=0 ppid=3494 pid=3778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:48.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:48.913000 audit[3780]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:48.913000 audit[3780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffcd60cf90 a2=0 a3=1 items=0 ppid=3494 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 01:00:48.913000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:48.917000 audit[3780]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:48.917000 audit[3780]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcd60cf90 a2=0 a3=1 items=0 ppid=3494 pid=3780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:48.917000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:49.938567 kernel: kauditd_printk_skb: 19 callbacks suppressed Jan 14 01:00:49.938684 kernel: audit: type=1325 audit(1768352449.935:531): table=filter:113 family=2 entries=19 op=nft_register_rule pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:49.935000 audit[3782]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:49.935000 audit[3782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe5b4dda0 a2=0 a3=1 items=0 ppid=3494 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:49.942541 kernel: audit: type=1300 audit(1768352449.935:531): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe5b4dda0 a2=0 a3=1 items=0 ppid=3494 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:49.935000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:49.940000 audit[3782]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:49.946252 kernel: audit: type=1327 audit(1768352449.935:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:49.946314 kernel: audit: type=1325 audit(1768352449.940:532): table=nat:114 family=2 entries=12 op=nft_register_rule pid=3782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:49.940000 audit[3782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5b4dda0 a2=0 a3=1 items=0 ppid=3494 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:49.950150 kernel: audit: type=1300 audit(1768352449.940:532): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe5b4dda0 a2=0 a3=1 items=0 ppid=3494 pid=3782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:49.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:49.952773 kernel: audit: type=1327 audit(1768352449.940:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:51.276000 audit[3784]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:00:51.276000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffeb38d720 a2=0 a3=1 items=0 ppid=3494 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.284123 kernel: audit: type=1325 audit(1768352451.276:533): table=filter:115 family=2 entries=21 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:51.284204 kernel: audit: type=1300 audit(1768352451.276:533): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffeb38d720 a2=0 a3=1 items=0 ppid=3494 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:51.286033 kernel: audit: type=1327 audit(1768352451.276:533): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:51.290000 audit[3784]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:51.290000 audit[3784]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb38d720 a2=0 a3=1 items=0 ppid=3494 pid=3784 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.290000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 
01:00:51.293715 kernel: audit: type=1325 audit(1768352451.290:534): table=nat:116 family=2 entries=12 op=nft_register_rule pid=3784 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:51.316911 systemd[1]: Created slice kubepods-besteffort-pod25b95cbc_2973_4f25_8999_17d1a9f08ac7.slice - libcontainer container kubepods-besteffort-pod25b95cbc_2973_4f25_8999_17d1a9f08ac7.slice. Jan 14 01:00:51.336192 kubelet[3360]: I0114 01:00:51.336145 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/25b95cbc-2973-4f25-8999-17d1a9f08ac7-typha-certs\") pod \"calico-typha-547c47f45d-gkbkx\" (UID: \"25b95cbc-2973-4f25-8999-17d1a9f08ac7\") " pod="calico-system/calico-typha-547c47f45d-gkbkx" Jan 14 01:00:51.336560 kubelet[3360]: I0114 01:00:51.336209 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x2ds\" (UniqueName: \"kubernetes.io/projected/25b95cbc-2973-4f25-8999-17d1a9f08ac7-kube-api-access-5x2ds\") pod \"calico-typha-547c47f45d-gkbkx\" (UID: \"25b95cbc-2973-4f25-8999-17d1a9f08ac7\") " pod="calico-system/calico-typha-547c47f45d-gkbkx" Jan 14 01:00:51.336560 kubelet[3360]: I0114 01:00:51.336233 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25b95cbc-2973-4f25-8999-17d1a9f08ac7-tigera-ca-bundle\") pod \"calico-typha-547c47f45d-gkbkx\" (UID: \"25b95cbc-2973-4f25-8999-17d1a9f08ac7\") " pod="calico-system/calico-typha-547c47f45d-gkbkx" Jan 14 01:00:51.499672 systemd[1]: Created slice kubepods-besteffort-podaf38cc54_d5c6_44d9_8eb6_236354a82cae.slice - libcontainer container kubepods-besteffort-podaf38cc54_d5c6_44d9_8eb6_236354a82cae.slice. 
Jan 14 01:00:51.538342 kubelet[3360]: I0114 01:00:51.538091 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-cni-net-dir\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538342 kubelet[3360]: I0114 01:00:51.538131 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q7mb\" (UniqueName: \"kubernetes.io/projected/af38cc54-d5c6-44d9-8eb6-236354a82cae-kube-api-access-8q7mb\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538342 kubelet[3360]: I0114 01:00:51.538189 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-xtables-lock\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538342 kubelet[3360]: I0114 01:00:51.538212 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-policysync\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538342 kubelet[3360]: I0114 01:00:51.538264 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af38cc54-d5c6-44d9-8eb6-236354a82cae-tigera-ca-bundle\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538603 kubelet[3360]: I0114 01:00:51.538283 3360 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-cni-log-dir\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538603 kubelet[3360]: I0114 01:00:51.538314 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-lib-modules\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538603 kubelet[3360]: I0114 01:00:51.538331 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-var-lib-calico\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538603 kubelet[3360]: I0114 01:00:51.538347 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-var-run-calico\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538603 kubelet[3360]: I0114 01:00:51.538363 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af38cc54-d5c6-44d9-8eb6-236354a82cae-node-certs\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538744 kubelet[3360]: I0114 01:00:51.538393 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-flexvol-driver-host\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.538744 kubelet[3360]: I0114 01:00:51.538413 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af38cc54-d5c6-44d9-8eb6-236354a82cae-cni-bin-dir\") pod \"calico-node-2gfzq\" (UID: \"af38cc54-d5c6-44d9-8eb6-236354a82cae\") " pod="calico-system/calico-node-2gfzq" Jan 14 01:00:51.622240 containerd[1673]: time="2026-01-14T01:00:51.622195446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547c47f45d-gkbkx,Uid:25b95cbc-2973-4f25-8999-17d1a9f08ac7,Namespace:calico-system,Attempt:0,}" Jan 14 01:00:51.640073 kubelet[3360]: E0114 01:00:51.640037 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.640073 kubelet[3360]: W0114 01:00:51.640063 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.640251 kubelet[3360]: E0114 01:00:51.640088 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.640310 kubelet[3360]: E0114 01:00:51.640254 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.640310 kubelet[3360]: W0114 01:00:51.640263 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.640310 kubelet[3360]: E0114 01:00:51.640271 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.640507 kubelet[3360]: E0114 01:00:51.640490 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.640507 kubelet[3360]: W0114 01:00:51.640502 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.640559 kubelet[3360]: E0114 01:00:51.640511 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.640766 kubelet[3360]: E0114 01:00:51.640752 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.640766 kubelet[3360]: W0114 01:00:51.640764 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.640837 kubelet[3360]: E0114 01:00:51.640775 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.641595 kubelet[3360]: E0114 01:00:51.641569 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.641595 kubelet[3360]: W0114 01:00:51.641587 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.641673 kubelet[3360]: E0114 01:00:51.641600 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.641851 kubelet[3360]: E0114 01:00:51.641835 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.641918 kubelet[3360]: W0114 01:00:51.641851 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.641918 kubelet[3360]: E0114 01:00:51.641862 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 14 01:00:51.643012 containerd[1673]: time="2026-01-14T01:00:51.642894509Z" level=info msg="connecting to shim 059195883ef3a1ffcfd5cf92f1136c571e55c3c754dd0f5046f9a9de23e48cc7" address="unix:///run/containerd/s/b538e4a1ceb0990509e3d3affc5953da81366d214d85db9254b7be11b98418d0" namespace=k8s.io protocol=ttrpc version=3
[identical kubelet[3360] FlexVolume probe-failure messages (driver-call.go:262 / driver-call.go:149 / plugins.go:703, as above) repeated from 01:00:51.642 through 01:00:51.666 elided]
Jan 14 01:00:51.680949 systemd[1]: Started cri-containerd-059195883ef3a1ffcfd5cf92f1136c571e55c3c754dd0f5046f9a9de23e48cc7.scope - libcontainer container 059195883ef3a1ffcfd5cf92f1136c571e55c3c754dd0f5046f9a9de23e48cc7.
Jan 14 01:00:51.692836 kubelet[3360]: E0114 01:00:51.692489 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1"
Jan 14 01:00:51.700000 audit: BPF prog-id=151 op=LOAD Jan 14 01:00:51.700000 audit: BPF prog-id=152 op=LOAD Jan 14 01:00:51.700000 audit[3837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.700000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336 Jan 14 01:00:51.701000 audit: BPF prog-id=152 op=UNLOAD Jan 14 01:00:51.701000 audit[3837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336 Jan 14 01:00:51.701000 audit: BPF prog-id=153 op=LOAD Jan 14 01:00:51.701000 audit[3837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336 Jan 14 01:00:51.701000 audit: BPF prog-id=154 op=LOAD Jan 14 01:00:51.701000 audit[3837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:00:51.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336 Jan 14 01:00:51.701000 audit: BPF prog-id=154 op=UNLOAD Jan 14 01:00:51.701000 audit[3837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336 Jan 14 01:00:51.701000 audit: BPF prog-id=153 op=UNLOAD Jan 14 01:00:51.701000 audit[3837]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336 Jan 14 01:00:51.701000 audit: BPF prog-id=155 op=LOAD Jan 14 01:00:51.701000 audit[3837]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3800 pid=3837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3035393139353838336566336131666663666435636639326631313336
[identical kubelet[3360] FlexVolume probe-failure messages (driver-call.go:262 / driver-call.go:149 / plugins.go:703) repeated from 01:00:51.718 through 01:00:51.721 elided]
Jan 14 01:00:51.721143 kubelet[3360]: E0114 01:00:51.721114 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.721361 kubelet[3360]: E0114 01:00:51.721314 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.721361 kubelet[3360]: W0114 01:00:51.721342 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.721361 kubelet[3360]: E0114 01:00:51.721354 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.721633 kubelet[3360]: E0114 01:00:51.721578 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.721633 kubelet[3360]: W0114 01:00:51.721588 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.721633 kubelet[3360]: E0114 01:00:51.721596 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.721885 kubelet[3360]: E0114 01:00:51.721831 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.721885 kubelet[3360]: W0114 01:00:51.721841 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.721885 kubelet[3360]: E0114 01:00:51.721850 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.721997 kubelet[3360]: E0114 01:00:51.721984 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.721997 kubelet[3360]: W0114 01:00:51.721994 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722157 kubelet[3360]: E0114 01:00:51.722002 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.722157 kubelet[3360]: E0114 01:00:51.722141 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.722157 kubelet[3360]: W0114 01:00:51.722150 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722157 kubelet[3360]: E0114 01:00:51.722159 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.722294 kubelet[3360]: E0114 01:00:51.722283 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.722294 kubelet[3360]: W0114 01:00:51.722293 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722355 kubelet[3360]: E0114 01:00:51.722302 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.722431 kubelet[3360]: E0114 01:00:51.722421 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.722431 kubelet[3360]: W0114 01:00:51.722431 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722482 kubelet[3360]: E0114 01:00:51.722438 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.722566 kubelet[3360]: E0114 01:00:51.722556 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.722566 kubelet[3360]: W0114 01:00:51.722565 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722624 kubelet[3360]: E0114 01:00:51.722573 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.722728 kubelet[3360]: E0114 01:00:51.722717 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.722754 kubelet[3360]: W0114 01:00:51.722730 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722754 kubelet[3360]: E0114 01:00:51.722739 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.722893 kubelet[3360]: E0114 01:00:51.722883 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.722893 kubelet[3360]: W0114 01:00:51.722893 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.722953 kubelet[3360]: E0114 01:00:51.722902 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.723059 kubelet[3360]: E0114 01:00:51.723020 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.723059 kubelet[3360]: W0114 01:00:51.723029 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.723059 kubelet[3360]: E0114 01:00:51.723036 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.723159 kubelet[3360]: E0114 01:00:51.723147 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.723159 kubelet[3360]: W0114 01:00:51.723157 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.723212 kubelet[3360]: E0114 01:00:51.723164 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.730276 containerd[1673]: time="2026-01-14T01:00:51.730242177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-547c47f45d-gkbkx,Uid:25b95cbc-2973-4f25-8999-17d1a9f08ac7,Namespace:calico-system,Attempt:0,} returns sandbox id \"059195883ef3a1ffcfd5cf92f1136c571e55c3c754dd0f5046f9a9de23e48cc7\"" Jan 14 01:00:51.731535 containerd[1673]: time="2026-01-14T01:00:51.731505381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 01:00:51.747416 kubelet[3360]: E0114 01:00:51.747358 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.747416 kubelet[3360]: W0114 01:00:51.747379 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.747416 kubelet[3360]: E0114 01:00:51.747395 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.747657 kubelet[3360]: I0114 01:00:51.747609 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2d5x\" (UniqueName: \"kubernetes.io/projected/156d3f01-7b19-463c-9dd8-133de12239c1-kube-api-access-t2d5x\") pod \"csi-node-driver-7cqlb\" (UID: \"156d3f01-7b19-463c-9dd8-133de12239c1\") " pod="calico-system/csi-node-driver-7cqlb" Jan 14 01:00:51.747923 kubelet[3360]: E0114 01:00:51.747904 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.748044 kubelet[3360]: W0114 01:00:51.747981 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.748044 kubelet[3360]: E0114 01:00:51.747998 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.748044 kubelet[3360]: I0114 01:00:51.748025 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/156d3f01-7b19-463c-9dd8-133de12239c1-registration-dir\") pod \"csi-node-driver-7cqlb\" (UID: \"156d3f01-7b19-463c-9dd8-133de12239c1\") " pod="calico-system/csi-node-driver-7cqlb" Jan 14 01:00:51.748281 kubelet[3360]: E0114 01:00:51.748263 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.748328 kubelet[3360]: W0114 01:00:51.748281 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.748328 kubelet[3360]: E0114 01:00:51.748297 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.748445 kubelet[3360]: E0114 01:00:51.748434 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.748445 kubelet[3360]: W0114 01:00:51.748445 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.748517 kubelet[3360]: E0114 01:00:51.748453 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.748644 kubelet[3360]: E0114 01:00:51.748612 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.748644 kubelet[3360]: W0114 01:00:51.748633 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.748644 kubelet[3360]: E0114 01:00:51.748645 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.748792 kubelet[3360]: I0114 01:00:51.748670 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/156d3f01-7b19-463c-9dd8-133de12239c1-kubelet-dir\") pod \"csi-node-driver-7cqlb\" (UID: \"156d3f01-7b19-463c-9dd8-133de12239c1\") " pod="calico-system/csi-node-driver-7cqlb" Jan 14 01:00:51.749062 kubelet[3360]: E0114 01:00:51.748989 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.749062 kubelet[3360]: W0114 01:00:51.749003 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.749062 kubelet[3360]: E0114 01:00:51.749015 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.749388 kubelet[3360]: E0114 01:00:51.749311 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.749388 kubelet[3360]: W0114 01:00:51.749323 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.749388 kubelet[3360]: E0114 01:00:51.749333 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.749711 kubelet[3360]: E0114 01:00:51.749627 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.749711 kubelet[3360]: W0114 01:00:51.749639 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.749711 kubelet[3360]: E0114 01:00:51.749649 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.749711 kubelet[3360]: I0114 01:00:51.749675 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/156d3f01-7b19-463c-9dd8-133de12239c1-socket-dir\") pod \"csi-node-driver-7cqlb\" (UID: \"156d3f01-7b19-463c-9dd8-133de12239c1\") " pod="calico-system/csi-node-driver-7cqlb" Jan 14 01:00:51.749898 kubelet[3360]: E0114 01:00:51.749875 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.749898 kubelet[3360]: W0114 01:00:51.749888 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.749938 kubelet[3360]: E0114 01:00:51.749900 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.750077 kubelet[3360]: E0114 01:00:51.750065 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.750077 kubelet[3360]: W0114 01:00:51.750076 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.750138 kubelet[3360]: E0114 01:00:51.750087 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.750223 kubelet[3360]: E0114 01:00:51.750213 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.750223 kubelet[3360]: W0114 01:00:51.750223 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.750273 kubelet[3360]: E0114 01:00:51.750232 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.750473 kubelet[3360]: E0114 01:00:51.750455 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.750518 kubelet[3360]: W0114 01:00:51.750473 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.750518 kubelet[3360]: E0114 01:00:51.750486 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.750715 kubelet[3360]: E0114 01:00:51.750700 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.750715 kubelet[3360]: W0114 01:00:51.750712 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.750766 kubelet[3360]: E0114 01:00:51.750721 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.750766 kubelet[3360]: I0114 01:00:51.750742 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/156d3f01-7b19-463c-9dd8-133de12239c1-varrun\") pod \"csi-node-driver-7cqlb\" (UID: \"156d3f01-7b19-463c-9dd8-133de12239c1\") " pod="calico-system/csi-node-driver-7cqlb" Jan 14 01:00:51.750955 kubelet[3360]: E0114 01:00:51.750939 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.750995 kubelet[3360]: W0114 01:00:51.750951 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.750995 kubelet[3360]: E0114 01:00:51.750974 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.751128 kubelet[3360]: E0114 01:00:51.751105 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.751128 kubelet[3360]: W0114 01:00:51.751117 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.751128 kubelet[3360]: E0114 01:00:51.751125 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.805386 containerd[1673]: time="2026-01-14T01:00:51.804677605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gfzq,Uid:af38cc54-d5c6-44d9-8eb6-236354a82cae,Namespace:calico-system,Attempt:0,}" Jan 14 01:00:51.828220 containerd[1673]: time="2026-01-14T01:00:51.828160037Z" level=info msg="connecting to shim 841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8" address="unix:///run/containerd/s/1cd836dbd8fbaa88e45a65013eaf48034df7583225a9ded79f1513f288f2c16a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:00:51.852131 kubelet[3360]: E0114 01:00:51.852090 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.852131 kubelet[3360]: W0114 01:00:51.852115 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.852131 kubelet[3360]: E0114 01:00:51.852134 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.852334 kubelet[3360]: E0114 01:00:51.852313 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.852334 kubelet[3360]: W0114 01:00:51.852325 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.852334 kubelet[3360]: E0114 01:00:51.852335 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.852548 kubelet[3360]: E0114 01:00:51.852533 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.852548 kubelet[3360]: W0114 01:00:51.852547 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.852604 kubelet[3360]: E0114 01:00:51.852558 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.852907 kubelet[3360]: E0114 01:00:51.852878 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.852907 kubelet[3360]: W0114 01:00:51.852897 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.852974 kubelet[3360]: E0114 01:00:51.852912 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853083 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854092 kubelet[3360]: W0114 01:00:51.853095 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853104 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853250 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854092 kubelet[3360]: W0114 01:00:51.853259 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853267 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853463 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854092 kubelet[3360]: W0114 01:00:51.853472 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853481 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.854092 kubelet[3360]: E0114 01:00:51.853700 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.853112 systemd[1]: Started cri-containerd-841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8.scope - libcontainer container 841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8. 
Jan 14 01:00:51.854420 kubelet[3360]: W0114 01:00:51.853710 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854420 kubelet[3360]: E0114 01:00:51.853719 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.854420 kubelet[3360]: E0114 01:00:51.853882 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854420 kubelet[3360]: W0114 01:00:51.853890 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854420 kubelet[3360]: E0114 01:00:51.853899 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.854420 kubelet[3360]: E0114 01:00:51.854228 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854420 kubelet[3360]: W0114 01:00:51.854239 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854420 kubelet[3360]: E0114 01:00:51.854250 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.854784 kubelet[3360]: E0114 01:00:51.854717 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854784 kubelet[3360]: W0114 01:00:51.854735 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.854784 kubelet[3360]: E0114 01:00:51.854748 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.854949 kubelet[3360]: E0114 01:00:51.854934 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.854949 kubelet[3360]: W0114 01:00:51.854946 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.855018 kubelet[3360]: E0114 01:00:51.854957 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.855490 kubelet[3360]: E0114 01:00:51.855466 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.855490 kubelet[3360]: W0114 01:00:51.855482 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.855490 kubelet[3360]: E0114 01:00:51.855494 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.855954 kubelet[3360]: E0114 01:00:51.855665 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.855954 kubelet[3360]: W0114 01:00:51.855677 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.855954 kubelet[3360]: E0114 01:00:51.855685 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.855954 kubelet[3360]: E0114 01:00:51.855912 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.855954 kubelet[3360]: W0114 01:00:51.855925 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.855954 kubelet[3360]: E0114 01:00:51.855937 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.856224 kubelet[3360]: E0114 01:00:51.856203 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.856224 kubelet[3360]: W0114 01:00:51.856215 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.856224 kubelet[3360]: E0114 01:00:51.856225 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.856744 kubelet[3360]: E0114 01:00:51.856599 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.856744 kubelet[3360]: W0114 01:00:51.856727 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.856744 kubelet[3360]: E0114 01:00:51.856740 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.857432 kubelet[3360]: E0114 01:00:51.857233 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.857502 kubelet[3360]: W0114 01:00:51.857434 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.857502 kubelet[3360]: E0114 01:00:51.857450 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.858480 kubelet[3360]: E0114 01:00:51.858459 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.858480 kubelet[3360]: W0114 01:00:51.858475 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.858630 kubelet[3360]: E0114 01:00:51.858489 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.858715 kubelet[3360]: E0114 01:00:51.858699 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.858715 kubelet[3360]: W0114 01:00:51.858712 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.858773 kubelet[3360]: E0114 01:00:51.858723 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.858921 kubelet[3360]: E0114 01:00:51.858858 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.858921 kubelet[3360]: W0114 01:00:51.858870 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.858921 kubelet[3360]: E0114 01:00:51.858879 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.859182 kubelet[3360]: E0114 01:00:51.859078 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.859182 kubelet[3360]: W0114 01:00:51.859091 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.859182 kubelet[3360]: E0114 01:00:51.859102 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.859426 kubelet[3360]: E0114 01:00:51.859328 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.859426 kubelet[3360]: W0114 01:00:51.859338 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.859426 kubelet[3360]: E0114 01:00:51.859347 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.859661 kubelet[3360]: E0114 01:00:51.859644 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.859927 kubelet[3360]: W0114 01:00:51.859669 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.859927 kubelet[3360]: E0114 01:00:51.859680 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:51.859927 kubelet[3360]: E0114 01:00:51.859918 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.859927 kubelet[3360]: W0114 01:00:51.859927 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.860045 kubelet[3360]: E0114 01:00:51.859937 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.865000 audit: BPF prog-id=156 op=LOAD Jan 14 01:00:51.866000 audit: BPF prog-id=157 op=LOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.866000 audit: BPF prog-id=157 op=UNLOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.866000 audit: BPF prog-id=158 op=LOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.866000 audit: BPF prog-id=159 op=LOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.866000 audit: BPF prog-id=159 op=UNLOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.866000 audit: BPF prog-id=158 op=UNLOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.866000 audit: BPF prog-id=160 op=LOAD Jan 14 01:00:51.866000 audit[3945]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3934 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:51.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3834313632346237363239656237653737353237613835616336313636 Jan 14 01:00:51.874412 kubelet[3360]: E0114 01:00:51.874233 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:51.874412 kubelet[3360]: W0114 01:00:51.874254 3360 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:51.874412 kubelet[3360]: E0114 01:00:51.874273 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:51.882860 containerd[1673]: time="2026-01-14T01:00:51.882825884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2gfzq,Uid:af38cc54-d5c6-44d9-8eb6-236354a82cae,Namespace:calico-system,Attempt:0,} returns sandbox id \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\"" Jan 14 01:00:52.303000 audit[3998]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:52.303000 audit[3998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe38cfbd0 a2=0 a3=1 items=0 ppid=3494 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:52.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:52.313000 audit[3998]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3998 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:00:52.313000 audit[3998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe38cfbd0 a2=0 a3=1 items=0 ppid=3494 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:52.313000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:00:53.217762 kubelet[3360]: E0114 01:00:53.217711 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:00:53.267857 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946615659.mount: Deactivated successfully. Jan 14 01:00:54.254934 containerd[1673]: time="2026-01-14T01:00:54.254879152Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:54.257252 containerd[1673]: time="2026-01-14T01:00:54.256838878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 14 01:00:54.259731 containerd[1673]: time="2026-01-14T01:00:54.259081165Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:54.261336 containerd[1673]: time="2026-01-14T01:00:54.261279532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:54.262369 containerd[1673]: time="2026-01-14T01:00:54.262331615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" 
in 2.530799874s" Jan 14 01:00:54.262369 containerd[1673]: time="2026-01-14T01:00:54.262366855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 01:00:54.264521 containerd[1673]: time="2026-01-14T01:00:54.264483302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 01:00:54.276451 containerd[1673]: time="2026-01-14T01:00:54.276410178Z" level=info msg="CreateContainer within sandbox \"059195883ef3a1ffcfd5cf92f1136c571e55c3c754dd0f5046f9a9de23e48cc7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 01:00:54.289911 containerd[1673]: time="2026-01-14T01:00:54.288853456Z" level=info msg="Container 8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:54.297377 containerd[1673]: time="2026-01-14T01:00:54.297331762Z" level=info msg="CreateContainer within sandbox \"059195883ef3a1ffcfd5cf92f1136c571e55c3c754dd0f5046f9a9de23e48cc7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7\"" Jan 14 01:00:54.298011 containerd[1673]: time="2026-01-14T01:00:54.297981804Z" level=info msg="StartContainer for \"8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7\"" Jan 14 01:00:54.299134 containerd[1673]: time="2026-01-14T01:00:54.299101688Z" level=info msg="connecting to shim 8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7" address="unix:///run/containerd/s/b538e4a1ceb0990509e3d3affc5953da81366d214d85db9254b7be11b98418d0" protocol=ttrpc version=3 Jan 14 01:00:54.326115 systemd[1]: Started cri-containerd-8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7.scope - libcontainer container 8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7. 
Jan 14 01:00:54.336000 audit: BPF prog-id=161 op=LOAD Jan 14 01:00:54.336000 audit: BPF prog-id=162 op=LOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:54.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.336000 audit: BPF prog-id=162 op=UNLOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:54.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.336000 audit: BPF prog-id=163 op=LOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:54.336000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.336000 audit: BPF prog-id=164 op=LOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:54.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.336000 audit: BPF prog-id=164 op=UNLOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:54.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.336000 audit: BPF prog-id=163 op=UNLOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:00:54.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.336000 audit: BPF prog-id=165 op=LOAD Jan 14 01:00:54.336000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3800 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:54.336000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862353165323233303066316662623961636338313365663839656435 Jan 14 01:00:54.364409 containerd[1673]: time="2026-01-14T01:00:54.364364928Z" level=info msg="StartContainer for \"8b51e22300f1fbb9acc813ef89ed5a5182e1b655ecd080359070bbe1c4e746e7\" returns successfully" Jan 14 01:00:55.217651 kubelet[3360]: E0114 01:00:55.217478 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:00:55.348006 kubelet[3360]: E0114 01:00:55.347957 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348006 kubelet[3360]: W0114 01:00:55.347990 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found 
in $PATH, output: "" Jan 14 01:00:55.348006 kubelet[3360]: E0114 01:00:55.348008 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.348219 kubelet[3360]: E0114 01:00:55.348193 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348347 kubelet[3360]: W0114 01:00:55.348209 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.348347 kubelet[3360]: E0114 01:00:55.348249 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.348462 kubelet[3360]: E0114 01:00:55.348445 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348462 kubelet[3360]: W0114 01:00:55.348453 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.348462 kubelet[3360]: E0114 01:00:55.348460 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.348637 kubelet[3360]: E0114 01:00:55.348586 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348637 kubelet[3360]: W0114 01:00:55.348598 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.348637 kubelet[3360]: E0114 01:00:55.348615 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.348863 kubelet[3360]: E0114 01:00:55.348822 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348863 kubelet[3360]: W0114 01:00:55.348843 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.348863 kubelet[3360]: E0114 01:00:55.348857 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.348992 kubelet[3360]: E0114 01:00:55.348979 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348992 kubelet[3360]: W0114 01:00:55.348990 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.348992 kubelet[3360]: E0114 01:00:55.349006 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.348992 kubelet[3360]: E0114 01:00:55.349140 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.348992 kubelet[3360]: W0114 01:00:55.349149 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.349295 kubelet[3360]: E0114 01:00:55.349176 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.349320 kubelet[3360]: E0114 01:00:55.349306 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.349320 kubelet[3360]: W0114 01:00:55.349314 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.349362 kubelet[3360]: E0114 01:00:55.349331 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.349488 kubelet[3360]: E0114 01:00:55.349471 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.349513 kubelet[3360]: W0114 01:00:55.349488 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.349513 kubelet[3360]: E0114 01:00:55.349496 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.349620 kubelet[3360]: E0114 01:00:55.349609 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.349620 kubelet[3360]: W0114 01:00:55.349618 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.349666 kubelet[3360]: E0114 01:00:55.349626 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.349806 kubelet[3360]: E0114 01:00:55.349786 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.349829 kubelet[3360]: W0114 01:00:55.349805 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.349829 kubelet[3360]: E0114 01:00:55.349814 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.349941 kubelet[3360]: E0114 01:00:55.349930 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.349941 kubelet[3360]: W0114 01:00:55.349940 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.349985 kubelet[3360]: E0114 01:00:55.349948 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.350096 kubelet[3360]: E0114 01:00:55.350086 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.350118 kubelet[3360]: W0114 01:00:55.350096 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.350118 kubelet[3360]: E0114 01:00:55.350105 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.350233 kubelet[3360]: E0114 01:00:55.350223 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.350266 kubelet[3360]: W0114 01:00:55.350233 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.350266 kubelet[3360]: E0114 01:00:55.350240 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.350388 kubelet[3360]: E0114 01:00:55.350373 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.350388 kubelet[3360]: W0114 01:00:55.350384 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.350446 kubelet[3360]: E0114 01:00:55.350405 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.379163 kubelet[3360]: E0114 01:00:55.379130 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.379163 kubelet[3360]: W0114 01:00:55.379153 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.379163 kubelet[3360]: E0114 01:00:55.379167 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.379481 kubelet[3360]: E0114 01:00:55.379449 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.379481 kubelet[3360]: W0114 01:00:55.379463 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.379481 kubelet[3360]: E0114 01:00:55.379474 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.379758 kubelet[3360]: E0114 01:00:55.379738 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.379791 kubelet[3360]: W0114 01:00:55.379760 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.379791 kubelet[3360]: E0114 01:00:55.379773 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.379951 kubelet[3360]: E0114 01:00:55.379937 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.379974 kubelet[3360]: W0114 01:00:55.379950 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.379974 kubelet[3360]: E0114 01:00:55.379959 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.380109 kubelet[3360]: E0114 01:00:55.380098 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.380173 kubelet[3360]: W0114 01:00:55.380109 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.380173 kubelet[3360]: E0114 01:00:55.380117 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.380298 kubelet[3360]: E0114 01:00:55.380286 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.380298 kubelet[3360]: W0114 01:00:55.380296 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.380340 kubelet[3360]: E0114 01:00:55.380305 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.380557 kubelet[3360]: E0114 01:00:55.380519 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.380557 kubelet[3360]: W0114 01:00:55.380542 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.380557 kubelet[3360]: E0114 01:00:55.380551 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.380863 kubelet[3360]: E0114 01:00:55.380849 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.380863 kubelet[3360]: W0114 01:00:55.380861 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.380922 kubelet[3360]: E0114 01:00:55.380870 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.381071 kubelet[3360]: E0114 01:00:55.381057 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.381098 kubelet[3360]: W0114 01:00:55.381070 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.381098 kubelet[3360]: E0114 01:00:55.381083 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.381248 kubelet[3360]: E0114 01:00:55.381236 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.381273 kubelet[3360]: W0114 01:00:55.381247 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.381273 kubelet[3360]: E0114 01:00:55.381255 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.381420 kubelet[3360]: E0114 01:00:55.381408 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.381420 kubelet[3360]: W0114 01:00:55.381419 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.381468 kubelet[3360]: E0114 01:00:55.381427 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.381557 kubelet[3360]: E0114 01:00:55.381547 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.381587 kubelet[3360]: W0114 01:00:55.381577 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.381609 kubelet[3360]: E0114 01:00:55.381589 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.381752 kubelet[3360]: E0114 01:00:55.381740 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.381782 kubelet[3360]: W0114 01:00:55.381751 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.381782 kubelet[3360]: E0114 01:00:55.381760 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.381979 kubelet[3360]: E0114 01:00:55.381967 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.382002 kubelet[3360]: W0114 01:00:55.381979 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.382002 kubelet[3360]: E0114 01:00:55.381988 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.382182 kubelet[3360]: E0114 01:00:55.382172 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.382208 kubelet[3360]: W0114 01:00:55.382181 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.382208 kubelet[3360]: E0114 01:00:55.382189 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.382373 kubelet[3360]: E0114 01:00:55.382361 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.382395 kubelet[3360]: W0114 01:00:55.382373 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.382395 kubelet[3360]: E0114 01:00:55.382381 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.382624 kubelet[3360]: E0114 01:00:55.382608 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.382650 kubelet[3360]: W0114 01:00:55.382625 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.382650 kubelet[3360]: E0114 01:00:55.382637 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 01:00:55.382883 kubelet[3360]: E0114 01:00:55.382868 3360 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 01:00:55.382906 kubelet[3360]: W0114 01:00:55.382884 3360 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 01:00:55.382906 kubelet[3360]: E0114 01:00:55.382895 3360 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 01:00:55.896373 containerd[1673]: time="2026-01-14T01:00:55.896295742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:55.897991 containerd[1673]: time="2026-01-14T01:00:55.897938107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Jan 14 01:00:55.899547 containerd[1673]: time="2026-01-14T01:00:55.899510391Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:55.901911 containerd[1673]: time="2026-01-14T01:00:55.901870879Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:00:55.902477 containerd[1673]: time="2026-01-14T01:00:55.902438640Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.637894818s" Jan 14 01:00:55.902477 containerd[1673]: time="2026-01-14T01:00:55.902472320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 01:00:55.907780 containerd[1673]: time="2026-01-14T01:00:55.907747497Z" level=info msg="CreateContainer within sandbox \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 01:00:55.918957 containerd[1673]: time="2026-01-14T01:00:55.918898491Z" level=info msg="Container 3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:00:55.928083 containerd[1673]: time="2026-01-14T01:00:55.928038639Z" level=info msg="CreateContainer within sandbox \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb\"" Jan 14 01:00:55.929849 containerd[1673]: time="2026-01-14T01:00:55.929813324Z" level=info msg="StartContainer for \"3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb\"" Jan 14 01:00:55.931585 containerd[1673]: time="2026-01-14T01:00:55.931549370Z" level=info msg="connecting to shim 3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb" address="unix:///run/containerd/s/1cd836dbd8fbaa88e45a65013eaf48034df7583225a9ded79f1513f288f2c16a" protocol=ttrpc version=3 Jan 14 01:00:55.949874 systemd[1]: Started cri-containerd-3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb.scope - libcontainer container 3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb. 
Jan 14 01:00:56.015000 audit: BPF prog-id=166 op=LOAD Jan 14 01:00:56.016812 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 14 01:00:56.016911 kernel: audit: type=1334 audit(1768352456.015:561): prog-id=166 op=LOAD Jan 14 01:00:56.015000 audit[4084]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.020724 kernel: audit: type=1300 audit(1768352456.015:561): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.020778 kernel: audit: type=1327 audit(1768352456.015:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.015000 audit: BPF prog-id=167 op=LOAD Jan 14 01:00:56.015000 audit[4084]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.027788 kernel: audit: type=1334 
audit(1768352456.015:562): prog-id=167 op=LOAD Jan 14 01:00:56.027850 kernel: audit: type=1300 audit(1768352456.015:562): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.030934 kernel: audit: type=1327 audit(1768352456.015:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.031035 kernel: audit: type=1334 audit(1768352456.016:563): prog-id=167 op=UNLOAD Jan 14 01:00:56.016000 audit: BPF prog-id=167 op=UNLOAD Jan 14 01:00:56.016000 audit[4084]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.034774 kernel: audit: type=1300 audit(1768352456.016:563): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.034902 kernel: audit: type=1327 audit(1768352456.016:563): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.037939 kernel: audit: type=1334 audit(1768352456.016:564): prog-id=166 op=UNLOAD Jan 14 01:00:56.016000 audit: BPF prog-id=166 op=UNLOAD Jan 14 01:00:56.016000 audit[4084]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.016000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.016000 audit: BPF prog-id=168 op=LOAD Jan 14 01:00:56.016000 audit[4084]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3934 pid=4084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:00:56.016000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361343930393865626439613537623933363239303536653531366537 Jan 14 01:00:56.054104 containerd[1673]: time="2026-01-14T01:00:56.054062345Z" level=info msg="StartContainer for \"3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb\" returns successfully" Jan 14 01:00:56.062653 systemd[1]: cri-containerd-3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb.scope: Deactivated successfully. Jan 14 01:00:56.065420 containerd[1673]: time="2026-01-14T01:00:56.065286579Z" level=info msg="received container exit event container_id:\"3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb\" id:\"3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb\" pid:4097 exited_at:{seconds:1768352456 nanos:64674777}" Jan 14 01:00:56.068000 audit: BPF prog-id=168 op=UNLOAD Jan 14 01:00:56.087051 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a49098ebd9a57b93629056e516e7db47f9f1af05c0abe9925ddfb6f15f066cb-rootfs.mount: Deactivated successfully. 
Jan 14 01:00:56.580665 kubelet[3360]: I0114 01:00:56.291782 3360 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:00:56.580665 kubelet[3360]: I0114 01:00:56.306231 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-547c47f45d-gkbkx" podStartSLOduration=2.774101359 podStartE2EDuration="5.306214277s" podCreationTimestamp="2026-01-14 01:00:51 +0000 UTC" firstStartedPulling="2026-01-14 01:00:51.7312661 +0000 UTC m=+23.601395955" lastFinishedPulling="2026-01-14 01:00:54.263379018 +0000 UTC m=+26.133508873" observedRunningTime="2026-01-14 01:00:55.297159986 +0000 UTC m=+27.167289841" watchObservedRunningTime="2026-01-14 01:00:56.306214277 +0000 UTC m=+28.176344092" Jan 14 01:00:57.217920 kubelet[3360]: E0114 01:00:57.217869 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:00:59.218342 kubelet[3360]: E0114 01:00:59.218022 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:00:59.300554 containerd[1673]: time="2026-01-14T01:00:59.300501252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 01:01:01.217203 kubelet[3360]: E0114 01:01:01.217109 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:02.566239 containerd[1673]: time="2026-01-14T01:01:02.566183498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:01:02.567421 containerd[1673]: time="2026-01-14T01:01:02.567374342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 01:01:02.569109 containerd[1673]: time="2026-01-14T01:01:02.569070027Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:01:02.571652 containerd[1673]: time="2026-01-14T01:01:02.571593434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:01:02.572349 containerd[1673]: time="2026-01-14T01:01:02.572235076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.270950942s" Jan 14 01:01:02.572349 containerd[1673]: time="2026-01-14T01:01:02.572270597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 01:01:02.577133 containerd[1673]: time="2026-01-14T01:01:02.577081251Z" level=info msg="CreateContainer within sandbox \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 01:01:02.587933 
containerd[1673]: time="2026-01-14T01:01:02.587828684Z" level=info msg="Container fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:01:02.596795 containerd[1673]: time="2026-01-14T01:01:02.596758512Z" level=info msg="CreateContainer within sandbox \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934\"" Jan 14 01:01:02.597491 containerd[1673]: time="2026-01-14T01:01:02.597461834Z" level=info msg="StartContainer for \"fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934\"" Jan 14 01:01:02.599173 containerd[1673]: time="2026-01-14T01:01:02.599121239Z" level=info msg="connecting to shim fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934" address="unix:///run/containerd/s/1cd836dbd8fbaa88e45a65013eaf48034df7583225a9ded79f1513f288f2c16a" protocol=ttrpc version=3 Jan 14 01:01:02.618893 systemd[1]: Started cri-containerd-fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934.scope - libcontainer container fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934. 
Jan 14 01:01:02.672000 audit: BPF prog-id=169 op=LOAD Jan 14 01:01:02.674159 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:01:02.674208 kernel: audit: type=1334 audit(1768352462.672:567): prog-id=169 op=LOAD Jan 14 01:01:02.672000 audit[4144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.678146 kernel: audit: type=1300 audit(1768352462.672:567): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.678205 kernel: audit: type=1327 audit(1768352462.672:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.672000 audit: BPF prog-id=170 op=LOAD Jan 14 01:01:02.682074 kernel: audit: type=1334 audit(1768352462.672:568): prog-id=170 op=LOAD Jan 14 01:01:02.682103 kernel: audit: type=1300 audit(1768352462.672:568): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.672000 audit[4144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.672000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.688398 kernel: audit: type=1327 audit(1768352462.672:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.673000 audit: BPF prog-id=170 op=UNLOAD Jan 14 01:01:02.689504 kernel: audit: type=1334 audit(1768352462.673:569): prog-id=170 op=UNLOAD Jan 14 01:01:02.673000 audit[4144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.692658 kernel: audit: type=1300 audit(1768352462.673:569): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.696227 kernel: audit: type=1327 audit(1768352462.673:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.673000 audit: BPF prog-id=169 op=UNLOAD Jan 14 01:01:02.697201 kernel: audit: type=1334 audit(1768352462.673:570): prog-id=169 op=UNLOAD Jan 14 01:01:02.673000 audit[4144]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.673000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.673000 audit: BPF prog-id=171 op=LOAD Jan 14 01:01:02.673000 audit[4144]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3934 pid=4144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:02.673000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661386531616634336332363066646262646165333139363130303161 Jan 14 01:01:02.711855 containerd[1673]: time="2026-01-14T01:01:02.711760104Z" level=info msg="StartContainer for \"fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934\" returns successfully" Jan 14 01:01:03.217345 kubelet[3360]: E0114 01:01:03.217295 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:03.972430 systemd[1]: cri-containerd-fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934.scope: Deactivated successfully. Jan 14 01:01:03.972786 systemd[1]: cri-containerd-fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934.scope: Consumed 465ms CPU time, 187.3M memory peak, 165.9M written to disk. Jan 14 01:01:03.974722 containerd[1673]: time="2026-01-14T01:01:03.974660653Z" level=info msg="received container exit event container_id:\"fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934\" id:\"fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934\" pid:4157 exited_at:{seconds:1768352463 nanos:974467893}" Jan 14 01:01:03.977000 audit: BPF prog-id=171 op=UNLOAD Jan 14 01:01:03.980562 kubelet[3360]: I0114 01:01:03.979789 3360 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 14 01:01:04.000089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa8e1af43c260fdbbdae31961001af745b5872c6182521de0a59806d0cdf2934-rootfs.mount: Deactivated successfully. 
Jan 14 01:01:04.256275 systemd[1]: Created slice kubepods-burstable-pod87bb732d_d584_4746_964a_074ac60813c7.slice - libcontainer container kubepods-burstable-pod87bb732d_d584_4746_964a_074ac60813c7.slice. Jan 14 01:01:04.342125 kubelet[3360]: I0114 01:01:04.341949 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhh2\" (UniqueName: \"kubernetes.io/projected/87bb732d-d584-4746-964a-074ac60813c7-kube-api-access-klhh2\") pod \"coredns-674b8bbfcf-cn9rq\" (UID: \"87bb732d-d584-4746-964a-074ac60813c7\") " pod="kube-system/coredns-674b8bbfcf-cn9rq" Jan 14 01:01:04.509023 kubelet[3360]: I0114 01:01:04.342159 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87bb732d-d584-4746-964a-074ac60813c7-config-volume\") pod \"coredns-674b8bbfcf-cn9rq\" (UID: \"87bb732d-d584-4746-964a-074ac60813c7\") " pod="kube-system/coredns-674b8bbfcf-cn9rq" Jan 14 01:01:04.559663 containerd[1673]: time="2026-01-14T01:01:04.559476005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn9rq,Uid:87bb732d-d584-4746-964a-074ac60813c7,Namespace:kube-system,Attempt:0,}" Jan 14 01:01:05.441597 systemd[1]: Created slice kubepods-besteffort-pod9d6eeb8e_19e2_43b0_aa84_76d3b0583005.slice - libcontainer container kubepods-besteffort-pod9d6eeb8e_19e2_43b0_aa84_76d3b0583005.slice. Jan 14 01:01:05.450265 systemd[1]: Created slice kubepods-besteffort-pod156d3f01_7b19_463c_9dd8_133de12239c1.slice - libcontainer container kubepods-besteffort-pod156d3f01_7b19_463c_9dd8_133de12239c1.slice. 
Jan 14 01:01:05.453139 containerd[1673]: time="2026-01-14T01:01:05.452951143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqlb,Uid:156d3f01-7b19-463c-9dd8-133de12239c1,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:05.481061 systemd[1]: Created slice kubepods-besteffort-podb8e2c676_74d2_4fbb_b79b_4d5fa6599826.slice - libcontainer container kubepods-besteffort-podb8e2c676_74d2_4fbb_b79b_4d5fa6599826.slice. Jan 14 01:01:05.494290 systemd[1]: Created slice kubepods-burstable-pod3c6f06cc_5927_444c_9eab_65c1de1f4fb8.slice - libcontainer container kubepods-burstable-pod3c6f06cc_5927_444c_9eab_65c1de1f4fb8.slice. Jan 14 01:01:05.504050 systemd[1]: Created slice kubepods-besteffort-podaa2a7902_d245_486d_bc45_29c607d6f794.slice - libcontainer container kubepods-besteffort-podaa2a7902_d245_486d_bc45_29c607d6f794.slice. Jan 14 01:01:05.510198 systemd[1]: Created slice kubepods-besteffort-pod19f24365_565a_4989_ac4f_5964349e35e5.slice - libcontainer container kubepods-besteffort-pod19f24365_565a_4989_ac4f_5964349e35e5.slice. Jan 14 01:01:05.518360 systemd[1]: Created slice kubepods-besteffort-pod3fa9e714_815e_4dce_822e_1a57156e24fa.slice - libcontainer container kubepods-besteffort-pod3fa9e714_815e_4dce_822e_1a57156e24fa.slice. 
Jan 14 01:01:05.542969 containerd[1673]: time="2026-01-14T01:01:05.542908338Z" level=error msg="Failed to destroy network for sandbox \"ff05ffbb34abfc1f4a6238ab0f774bcb711e10b47bffd3b477115e380b5b9f3a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.543381 containerd[1673]: time="2026-01-14T01:01:05.543248820Z" level=error msg="Failed to destroy network for sandbox \"e53b18f9ce0da50b4be3216cf0a9647e39d494d6b7190bb77d9738fb984239a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.544988 systemd[1]: run-netns-cni\x2dadc907eb\x2d2e98\x2de29d\x2dfcbd\x2d6d039291f363.mount: Deactivated successfully. Jan 14 01:01:05.545084 systemd[1]: run-netns-cni\x2d2ece4b3b\x2d55b8\x2df10a\x2d44bf\x2d5005ad6b966b.mount: Deactivated successfully. 
Jan 14 01:01:05.546673 containerd[1673]: time="2026-01-14T01:01:05.546608550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqlb,Uid:156d3f01-7b19-463c-9dd8-133de12239c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53b18f9ce0da50b4be3216cf0a9647e39d494d6b7190bb77d9738fb984239a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.548075 kubelet[3360]: E0114 01:01:05.547983 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53b18f9ce0da50b4be3216cf0a9647e39d494d6b7190bb77d9738fb984239a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.549243 kubelet[3360]: E0114 01:01:05.548103 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53b18f9ce0da50b4be3216cf0a9647e39d494d6b7190bb77d9738fb984239a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cqlb" Jan 14 01:01:05.549243 kubelet[3360]: E0114 01:01:05.548124 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e53b18f9ce0da50b4be3216cf0a9647e39d494d6b7190bb77d9738fb984239a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7cqlb" 
Jan 14 01:01:05.549243 kubelet[3360]: E0114 01:01:05.548181 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e53b18f9ce0da50b4be3216cf0a9647e39d494d6b7190bb77d9738fb984239a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:05.549412 containerd[1673]: time="2026-01-14T01:01:05.548400075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn9rq,Uid:87bb732d-d584-4746-964a-074ac60813c7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff05ffbb34abfc1f4a6238ab0f774bcb711e10b47bffd3b477115e380b5b9f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.549457 kubelet[3360]: E0114 01:01:05.548567 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff05ffbb34abfc1f4a6238ab0f774bcb711e10b47bffd3b477115e380b5b9f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.549457 kubelet[3360]: E0114 01:01:05.548597 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"ff05ffbb34abfc1f4a6238ab0f774bcb711e10b47bffd3b477115e380b5b9f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cn9rq" Jan 14 01:01:05.549457 kubelet[3360]: E0114 01:01:05.548613 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff05ffbb34abfc1f4a6238ab0f774bcb711e10b47bffd3b477115e380b5b9f3a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cn9rq" Jan 14 01:01:05.549527 kubelet[3360]: E0114 01:01:05.548719 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cn9rq_kube-system(87bb732d-d584-4746-964a-074ac60813c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cn9rq_kube-system(87bb732d-d584-4746-964a-074ac60813c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff05ffbb34abfc1f4a6238ab0f774bcb711e10b47bffd3b477115e380b5b9f3a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cn9rq" podUID="87bb732d-d584-4746-964a-074ac60813c7" Jan 14 01:01:05.552569 kubelet[3360]: I0114 01:01:05.551836 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8rh\" (UniqueName: \"kubernetes.io/projected/9d6eeb8e-19e2-43b0-aa84-76d3b0583005-kube-api-access-mh8rh\") pod \"calico-kube-controllers-645b4cd96c-cs27s\" (UID: \"9d6eeb8e-19e2-43b0-aa84-76d3b0583005\") " 
pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" Jan 14 01:01:05.552569 kubelet[3360]: I0114 01:01:05.551875 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8e2c676-74d2-4fbb-b79b-4d5fa6599826-calico-apiserver-certs\") pod \"calico-apiserver-9bc96f9cf-lvk8t\" (UID: \"b8e2c676-74d2-4fbb-b79b-4d5fa6599826\") " pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" Jan 14 01:01:05.552569 kubelet[3360]: I0114 01:01:05.551902 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c6f06cc-5927-444c-9eab-65c1de1f4fb8-config-volume\") pod \"coredns-674b8bbfcf-rtnc2\" (UID: \"3c6f06cc-5927-444c-9eab-65c1de1f4fb8\") " pod="kube-system/coredns-674b8bbfcf-rtnc2" Jan 14 01:01:05.552569 kubelet[3360]: I0114 01:01:05.551918 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fa9e714-815e-4dce-822e-1a57156e24fa-goldmane-ca-bundle\") pod \"goldmane-666569f655-5456h\" (UID: \"3fa9e714-815e-4dce-822e-1a57156e24fa\") " pod="calico-system/goldmane-666569f655-5456h" Jan 14 01:01:05.552569 kubelet[3360]: I0114 01:01:05.551937 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gshth\" (UniqueName: \"kubernetes.io/projected/b8e2c676-74d2-4fbb-b79b-4d5fa6599826-kube-api-access-gshth\") pod \"calico-apiserver-9bc96f9cf-lvk8t\" (UID: \"b8e2c676-74d2-4fbb-b79b-4d5fa6599826\") " pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" Jan 14 01:01:05.552797 kubelet[3360]: I0114 01:01:05.551958 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-backend-key-pair\") pod \"whisker-94c6f8879-n2nnd\" (UID: \"aa2a7902-d245-486d-bc45-29c607d6f794\") " pod="calico-system/whisker-94c6f8879-n2nnd" Jan 14 01:01:05.552797 kubelet[3360]: I0114 01:01:05.551975 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwslj\" (UniqueName: \"kubernetes.io/projected/aa2a7902-d245-486d-bc45-29c607d6f794-kube-api-access-hwslj\") pod \"whisker-94c6f8879-n2nnd\" (UID: \"aa2a7902-d245-486d-bc45-29c607d6f794\") " pod="calico-system/whisker-94c6f8879-n2nnd" Jan 14 01:01:05.552797 kubelet[3360]: I0114 01:01:05.552010 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdv76\" (UniqueName: \"kubernetes.io/projected/19f24365-565a-4989-ac4f-5964349e35e5-kube-api-access-sdv76\") pod \"calico-apiserver-9bc96f9cf-6x9dt\" (UID: \"19f24365-565a-4989-ac4f-5964349e35e5\") " pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" Jan 14 01:01:05.552797 kubelet[3360]: I0114 01:01:05.552031 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3fa9e714-815e-4dce-822e-1a57156e24fa-config\") pod \"goldmane-666569f655-5456h\" (UID: \"3fa9e714-815e-4dce-822e-1a57156e24fa\") " pod="calico-system/goldmane-666569f655-5456h" Jan 14 01:01:05.552797 kubelet[3360]: I0114 01:01:05.552045 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jsb\" (UniqueName: \"kubernetes.io/projected/3fa9e714-815e-4dce-822e-1a57156e24fa-kube-api-access-98jsb\") pod \"goldmane-666569f655-5456h\" (UID: \"3fa9e714-815e-4dce-822e-1a57156e24fa\") " pod="calico-system/goldmane-666569f655-5456h" Jan 14 01:01:05.552905 kubelet[3360]: I0114 01:01:05.552103 3360 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4fcm\" (UniqueName: \"kubernetes.io/projected/3c6f06cc-5927-444c-9eab-65c1de1f4fb8-kube-api-access-q4fcm\") pod \"coredns-674b8bbfcf-rtnc2\" (UID: \"3c6f06cc-5927-444c-9eab-65c1de1f4fb8\") " pod="kube-system/coredns-674b8bbfcf-rtnc2" Jan 14 01:01:05.552905 kubelet[3360]: I0114 01:01:05.552174 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d6eeb8e-19e2-43b0-aa84-76d3b0583005-tigera-ca-bundle\") pod \"calico-kube-controllers-645b4cd96c-cs27s\" (UID: \"9d6eeb8e-19e2-43b0-aa84-76d3b0583005\") " pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" Jan 14 01:01:05.552905 kubelet[3360]: I0114 01:01:05.552195 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-ca-bundle\") pod \"whisker-94c6f8879-n2nnd\" (UID: \"aa2a7902-d245-486d-bc45-29c607d6f794\") " pod="calico-system/whisker-94c6f8879-n2nnd" Jan 14 01:01:05.552905 kubelet[3360]: I0114 01:01:05.552221 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/19f24365-565a-4989-ac4f-5964349e35e5-calico-apiserver-certs\") pod \"calico-apiserver-9bc96f9cf-6x9dt\" (UID: \"19f24365-565a-4989-ac4f-5964349e35e5\") " pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" Jan 14 01:01:05.552905 kubelet[3360]: I0114 01:01:05.552507 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3fa9e714-815e-4dce-822e-1a57156e24fa-goldmane-key-pair\") pod \"goldmane-666569f655-5456h\" (UID: \"3fa9e714-815e-4dce-822e-1a57156e24fa\") " 
pod="calico-system/goldmane-666569f655-5456h" Jan 14 01:01:05.747499 containerd[1673]: time="2026-01-14T01:01:05.747419965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4cd96c-cs27s,Uid:9d6eeb8e-19e2-43b0-aa84-76d3b0583005,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:05.787989 containerd[1673]: time="2026-01-14T01:01:05.787779809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-lvk8t,Uid:b8e2c676-74d2-4fbb-b79b-4d5fa6599826,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:01:05.789742 containerd[1673]: time="2026-01-14T01:01:05.789705575Z" level=error msg="Failed to destroy network for sandbox \"92c9e964e8dc957f869a5a85c84cc1b15dfa6e819a4f77caa2ca69c6e64db850\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.792136 containerd[1673]: time="2026-01-14T01:01:05.792101302Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4cd96c-cs27s,Uid:9d6eeb8e-19e2-43b0-aa84-76d3b0583005,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9e964e8dc957f869a5a85c84cc1b15dfa6e819a4f77caa2ca69c6e64db850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.792374 kubelet[3360]: E0114 01:01:05.792336 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9e964e8dc957f869a5a85c84cc1b15dfa6e819a4f77caa2ca69c6e64db850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 
01:01:05.792427 kubelet[3360]: E0114 01:01:05.792396 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9e964e8dc957f869a5a85c84cc1b15dfa6e819a4f77caa2ca69c6e64db850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" Jan 14 01:01:05.792427 kubelet[3360]: E0114 01:01:05.792418 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c9e964e8dc957f869a5a85c84cc1b15dfa6e819a4f77caa2ca69c6e64db850\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" Jan 14 01:01:05.792753 kubelet[3360]: E0114 01:01:05.792468 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92c9e964e8dc957f869a5a85c84cc1b15dfa6e819a4f77caa2ca69c6e64db850\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:01:05.799952 containerd[1673]: time="2026-01-14T01:01:05.799909726Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-rtnc2,Uid:3c6f06cc-5927-444c-9eab-65c1de1f4fb8,Namespace:kube-system,Attempt:0,}" Jan 14 01:01:05.806889 containerd[1673]: time="2026-01-14T01:01:05.806853387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94c6f8879-n2nnd,Uid:aa2a7902-d245-486d-bc45-29c607d6f794,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:05.818304 containerd[1673]: time="2026-01-14T01:01:05.818238342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-6x9dt,Uid:19f24365-565a-4989-ac4f-5964349e35e5,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:01:05.824265 containerd[1673]: time="2026-01-14T01:01:05.824215760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5456h,Uid:3fa9e714-815e-4dce-822e-1a57156e24fa,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:05.852208 containerd[1673]: time="2026-01-14T01:01:05.852146326Z" level=error msg="Failed to destroy network for sandbox \"1b036cd80ca330436a851b7ee58aa456f3fad0c38507df22cf98438074bf10ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.855051 containerd[1673]: time="2026-01-14T01:01:05.855006575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-lvk8t,Uid:b8e2c676-74d2-4fbb-b79b-4d5fa6599826,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b036cd80ca330436a851b7ee58aa456f3fad0c38507df22cf98438074bf10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.855265 kubelet[3360]: E0114 01:01:05.855230 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"1b036cd80ca330436a851b7ee58aa456f3fad0c38507df22cf98438074bf10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.855331 kubelet[3360]: E0114 01:01:05.855286 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b036cd80ca330436a851b7ee58aa456f3fad0c38507df22cf98438074bf10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" Jan 14 01:01:05.855331 kubelet[3360]: E0114 01:01:05.855308 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b036cd80ca330436a851b7ee58aa456f3fad0c38507df22cf98438074bf10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" Jan 14 01:01:05.855409 kubelet[3360]: E0114 01:01:05.855359 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b036cd80ca330436a851b7ee58aa456f3fad0c38507df22cf98438074bf10ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:01:05.864599 containerd[1673]: time="2026-01-14T01:01:05.864548964Z" level=error msg="Failed to destroy network for sandbox \"35085374de65634027fa46d4e44679dd5fce77a39881998d9d2e268529d84f07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.867115 containerd[1673]: time="2026-01-14T01:01:05.867072132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtnc2,Uid:3c6f06cc-5927-444c-9eab-65c1de1f4fb8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35085374de65634027fa46d4e44679dd5fce77a39881998d9d2e268529d84f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.867385 kubelet[3360]: E0114 01:01:05.867317 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35085374de65634027fa46d4e44679dd5fce77a39881998d9d2e268529d84f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.867872 kubelet[3360]: E0114 01:01:05.867424 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35085374de65634027fa46d4e44679dd5fce77a39881998d9d2e268529d84f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-rtnc2" Jan 14 01:01:05.867872 kubelet[3360]: E0114 01:01:05.867448 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35085374de65634027fa46d4e44679dd5fce77a39881998d9d2e268529d84f07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rtnc2" Jan 14 01:01:05.867872 kubelet[3360]: E0114 01:01:05.867515 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rtnc2_kube-system(3c6f06cc-5927-444c-9eab-65c1de1f4fb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rtnc2_kube-system(3c6f06cc-5927-444c-9eab-65c1de1f4fb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35085374de65634027fa46d4e44679dd5fce77a39881998d9d2e268529d84f07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rtnc2" podUID="3c6f06cc-5927-444c-9eab-65c1de1f4fb8" Jan 14 01:01:05.876828 containerd[1673]: time="2026-01-14T01:01:05.876787241Z" level=error msg="Failed to destroy network for sandbox \"d05b0c00f150cfd93ef1b6017f28360cef8d45dfc7da720966d89d46b3cff5dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.879043 containerd[1673]: time="2026-01-14T01:01:05.879005248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-94c6f8879-n2nnd,Uid:aa2a7902-d245-486d-bc45-29c607d6f794,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"d05b0c00f150cfd93ef1b6017f28360cef8d45dfc7da720966d89d46b3cff5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.879437 kubelet[3360]: E0114 01:01:05.879398 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05b0c00f150cfd93ef1b6017f28360cef8d45dfc7da720966d89d46b3cff5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.879518 kubelet[3360]: E0114 01:01:05.879458 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05b0c00f150cfd93ef1b6017f28360cef8d45dfc7da720966d89d46b3cff5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-94c6f8879-n2nnd" Jan 14 01:01:05.879518 kubelet[3360]: E0114 01:01:05.879485 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d05b0c00f150cfd93ef1b6017f28360cef8d45dfc7da720966d89d46b3cff5dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-94c6f8879-n2nnd" Jan 14 01:01:05.879571 kubelet[3360]: E0114 01:01:05.879530 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-94c6f8879-n2nnd_calico-system(aa2a7902-d245-486d-bc45-29c607d6f794)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"whisker-94c6f8879-n2nnd_calico-system(aa2a7902-d245-486d-bc45-29c607d6f794)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d05b0c00f150cfd93ef1b6017f28360cef8d45dfc7da720966d89d46b3cff5dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-94c6f8879-n2nnd" podUID="aa2a7902-d245-486d-bc45-29c607d6f794" Jan 14 01:01:05.892510 containerd[1673]: time="2026-01-14T01:01:05.892462290Z" level=error msg="Failed to destroy network for sandbox \"773cfc2266adba861def2cad4b2f4c2bcb38107607285f003154f8a8347213d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.893491 containerd[1673]: time="2026-01-14T01:01:05.893452493Z" level=error msg="Failed to destroy network for sandbox \"26f5476289a3c025ce4e0281918c131105cd30def4e71cc5016ef63190578007\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.895087 containerd[1673]: time="2026-01-14T01:01:05.894886297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-6x9dt,Uid:19f24365-565a-4989-ac4f-5964349e35e5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"773cfc2266adba861def2cad4b2f4c2bcb38107607285f003154f8a8347213d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.895334 kubelet[3360]: E0114 01:01:05.895282 3360 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"773cfc2266adba861def2cad4b2f4c2bcb38107607285f003154f8a8347213d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.895387 kubelet[3360]: E0114 01:01:05.895357 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"773cfc2266adba861def2cad4b2f4c2bcb38107607285f003154f8a8347213d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" Jan 14 01:01:05.895412 kubelet[3360]: E0114 01:01:05.895394 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"773cfc2266adba861def2cad4b2f4c2bcb38107607285f003154f8a8347213d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" Jan 14 01:01:05.895497 kubelet[3360]: E0114 01:01:05.895453 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9bc96f9cf-6x9dt_calico-apiserver(19f24365-565a-4989-ac4f-5964349e35e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9bc96f9cf-6x9dt_calico-apiserver(19f24365-565a-4989-ac4f-5964349e35e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"773cfc2266adba861def2cad4b2f4c2bcb38107607285f003154f8a8347213d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:01:05.897052 containerd[1673]: time="2026-01-14T01:01:05.896976343Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5456h,Uid:3fa9e714-815e-4dce-822e-1a57156e24fa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5476289a3c025ce4e0281918c131105cd30def4e71cc5016ef63190578007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.897203 kubelet[3360]: E0114 01:01:05.897171 3360 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5476289a3c025ce4e0281918c131105cd30def4e71cc5016ef63190578007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 01:01:05.897242 kubelet[3360]: E0114 01:01:05.897216 3360 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5476289a3c025ce4e0281918c131105cd30def4e71cc5016ef63190578007\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5456h" Jan 14 01:01:05.897275 kubelet[3360]: E0114 01:01:05.897234 3360 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5476289a3c025ce4e0281918c131105cd30def4e71cc5016ef63190578007\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5456h" Jan 14 01:01:05.897302 kubelet[3360]: E0114 01:01:05.897273 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5456h_calico-system(3fa9e714-815e-4dce-822e-1a57156e24fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5456h_calico-system(3fa9e714-815e-4dce-822e-1a57156e24fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26f5476289a3c025ce4e0281918c131105cd30def4e71cc5016ef63190578007\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:01:06.321315 containerd[1673]: time="2026-01-14T01:01:06.321215643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 01:01:08.989625 kubelet[3360]: I0114 01:01:08.988907 3360 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 01:01:09.015000 audit[4464]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:09.016939 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:01:09.017008 kernel: audit: type=1325 audit(1768352469.015:573): table=filter:119 family=2 entries=21 op=nft_register_rule pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:09.015000 audit[4464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd71ca0f0 a2=0 a3=1 items=0 ppid=3494 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:09.022994 kernel: audit: type=1300 audit(1768352469.015:573): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd71ca0f0 a2=0 a3=1 items=0 ppid=3494 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:09.015000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:09.025297 kernel: audit: type=1327 audit(1768352469.015:573): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:09.024000 audit[4464]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:09.028946 kernel: audit: type=1325 audit(1768352469.024:574): table=nat:120 family=2 entries=19 op=nft_register_chain pid=4464 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:09.024000 audit[4464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd71ca0f0 a2=0 a3=1 items=0 ppid=3494 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:09.033821 kernel: audit: type=1300 audit(1768352469.024:574): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd71ca0f0 a2=0 a3=1 items=0 ppid=3494 pid=4464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:09.024000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:09.035899 kernel: audit: type=1327 audit(1768352469.024:574): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:13.266592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928308228.mount: Deactivated successfully. Jan 14 01:01:13.294038 containerd[1673]: time="2026-01-14T01:01:13.293982968Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:01:13.295669 containerd[1673]: time="2026-01-14T01:01:13.295622573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 01:01:13.297406 containerd[1673]: time="2026-01-14T01:01:13.297358698Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:01:13.299872 containerd[1673]: time="2026-01-14T01:01:13.299828105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 01:01:13.300337 containerd[1673]: time="2026-01-14T01:01:13.300304707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.979012064s" Jan 14 01:01:13.300424 containerd[1673]: time="2026-01-14T01:01:13.300409307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 01:01:13.311319 containerd[1673]: time="2026-01-14T01:01:13.311282061Z" level=info msg="CreateContainer within sandbox \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 01:01:13.323932 containerd[1673]: time="2026-01-14T01:01:13.323891059Z" level=info msg="Container 41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:01:13.333134 containerd[1673]: time="2026-01-14T01:01:13.333098887Z" level=info msg="CreateContainer within sandbox \"841624b7629eb7e77527a85ac61665bb3895da6d20bf8b432f68f1a63f33f7a8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983\"" Jan 14 01:01:13.333565 containerd[1673]: time="2026-01-14T01:01:13.333514329Z" level=info msg="StartContainer for \"41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983\"" Jan 14 01:01:13.335406 containerd[1673]: time="2026-01-14T01:01:13.335379014Z" level=info msg="connecting to shim 41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983" address="unix:///run/containerd/s/1cd836dbd8fbaa88e45a65013eaf48034df7583225a9ded79f1513f288f2c16a" protocol=ttrpc version=3 Jan 14 01:01:13.363577 systemd[1]: Started cri-containerd-41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983.scope - libcontainer container 41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983. 
Jan 14 01:01:13.406000 audit: BPF prog-id=172 op=LOAD Jan 14 01:01:13.406000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:13.412138 kernel: audit: type=1334 audit(1768352473.406:575): prog-id=172 op=LOAD Jan 14 01:01:13.412198 kernel: audit: type=1300 audit(1768352473.406:575): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3934 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:13.412222 kernel: audit: type=1327 audit(1768352473.406:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431643263343332386438613138646338313362623964616438376436 Jan 14 01:01:13.406000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431643263343332386438613138646338313362623964616438376436 Jan 14 01:01:13.406000 audit: BPF prog-id=173 op=LOAD Jan 14 01:01:13.416532 kernel: audit: type=1334 audit(1768352473.406:576): prog-id=173 op=LOAD Jan 14 01:01:13.406000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3934 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:13.406000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431643263343332386438613138646338313362623964616438376436 Jan 14 01:01:13.411000 audit: BPF prog-id=173 op=UNLOAD Jan 14 01:01:13.411000 audit[4471]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:13.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431643263343332386438613138646338313362623964616438376436 Jan 14 01:01:13.411000 audit: BPF prog-id=172 op=UNLOAD Jan 14 01:01:13.411000 audit[4471]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3934 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:13.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431643263343332386438613138646338313362623964616438376436 Jan 14 01:01:13.411000 audit: BPF prog-id=174 op=LOAD Jan 14 01:01:13.411000 audit[4471]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3934 pid=4471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:13.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431643263343332386438613138646338313362623964616438376436 Jan 14 01:01:13.437826 containerd[1673]: time="2026-01-14T01:01:13.437783368Z" level=info msg="StartContainer for \"41d2c4328d8a18dc813bb9dad87d6c4ed6a2d1fcc42581df80f689d4b343a983\" returns successfully" Jan 14 01:01:13.577108 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 01:01:13.577679 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 01:01:13.810978 kubelet[3360]: I0114 01:01:13.810895 3360 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwslj\" (UniqueName: \"kubernetes.io/projected/aa2a7902-d245-486d-bc45-29c607d6f794-kube-api-access-hwslj\") pod \"aa2a7902-d245-486d-bc45-29c607d6f794\" (UID: \"aa2a7902-d245-486d-bc45-29c607d6f794\") " Jan 14 01:01:13.811658 kubelet[3360]: I0114 01:01:13.811639 3360 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-ca-bundle\") pod \"aa2a7902-d245-486d-bc45-29c607d6f794\" (UID: \"aa2a7902-d245-486d-bc45-29c607d6f794\") " Jan 14 01:01:13.811748 kubelet[3360]: I0114 01:01:13.811676 3360 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-backend-key-pair\") pod \"aa2a7902-d245-486d-bc45-29c607d6f794\" (UID: \"aa2a7902-d245-486d-bc45-29c607d6f794\") " Jan 14 01:01:13.812754 kubelet[3360]: I0114 01:01:13.812202 3360 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aa2a7902-d245-486d-bc45-29c607d6f794" (UID: "aa2a7902-d245-486d-bc45-29c607d6f794"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 01:01:13.814192 kubelet[3360]: I0114 01:01:13.814156 3360 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2a7902-d245-486d-bc45-29c607d6f794-kube-api-access-hwslj" (OuterVolumeSpecName: "kube-api-access-hwslj") pod "aa2a7902-d245-486d-bc45-29c607d6f794" (UID: "aa2a7902-d245-486d-bc45-29c607d6f794"). InnerVolumeSpecName "kube-api-access-hwslj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 01:01:13.814267 kubelet[3360]: I0114 01:01:13.814229 3360 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aa2a7902-d245-486d-bc45-29c607d6f794" (UID: "aa2a7902-d245-486d-bc45-29c607d6f794"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 01:01:13.912452 kubelet[3360]: I0114 01:01:13.912343 3360 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hwslj\" (UniqueName: \"kubernetes.io/projected/aa2a7902-d245-486d-bc45-29c607d6f794-kube-api-access-hwslj\") on node \"ci-4547-0-0-n-a666ba3d92\" DevicePath \"\"" Jan 14 01:01:13.912452 kubelet[3360]: I0114 01:01:13.912431 3360 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-ca-bundle\") on node \"ci-4547-0-0-n-a666ba3d92\" DevicePath \"\"" Jan 14 01:01:13.912452 kubelet[3360]: I0114 01:01:13.912465 3360 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa2a7902-d245-486d-bc45-29c607d6f794-whisker-backend-key-pair\") on node \"ci-4547-0-0-n-a666ba3d92\" DevicePath \"\"" Jan 14 01:01:14.223752 systemd[1]: Removed slice kubepods-besteffort-podaa2a7902_d245_486d_bc45_29c607d6f794.slice - libcontainer container kubepods-besteffort-podaa2a7902_d245_486d_bc45_29c607d6f794.slice. Jan 14 01:01:14.266745 systemd[1]: var-lib-kubelet-pods-aa2a7902\x2dd245\x2d486d\x2dbc45\x2d29c607d6f794-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhwslj.mount: Deactivated successfully. Jan 14 01:01:14.266840 systemd[1]: var-lib-kubelet-pods-aa2a7902\x2dd245\x2d486d\x2dbc45\x2d29c607d6f794-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 14 01:01:14.382062 kubelet[3360]: I0114 01:01:14.381996 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2gfzq" podStartSLOduration=1.9647742 podStartE2EDuration="23.381978781s" podCreationTimestamp="2026-01-14 01:00:51 +0000 UTC" firstStartedPulling="2026-01-14 01:00:51.883912608 +0000 UTC m=+23.754042463" lastFinishedPulling="2026-01-14 01:01:13.301117189 +0000 UTC m=+45.171247044" observedRunningTime="2026-01-14 01:01:14.368075059 +0000 UTC m=+46.238204914" watchObservedRunningTime="2026-01-14 01:01:14.381978781 +0000 UTC m=+46.252108596" Jan 14 01:01:14.422472 systemd[1]: Created slice kubepods-besteffort-pode7eb783d_4ded_4f2b_bd86_b988e9af5765.slice - libcontainer container kubepods-besteffort-pode7eb783d_4ded_4f2b_bd86_b988e9af5765.slice. Jan 14 01:01:14.517493 kubelet[3360]: I0114 01:01:14.517398 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7eb783d-4ded-4f2b-bd86-b988e9af5765-whisker-backend-key-pair\") pod \"whisker-85d9c4c575-qvf7c\" (UID: \"e7eb783d-4ded-4f2b-bd86-b988e9af5765\") " pod="calico-system/whisker-85d9c4c575-qvf7c" Jan 14 01:01:14.517630 kubelet[3360]: I0114 01:01:14.517522 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2knrg\" (UniqueName: \"kubernetes.io/projected/e7eb783d-4ded-4f2b-bd86-b988e9af5765-kube-api-access-2knrg\") pod \"whisker-85d9c4c575-qvf7c\" (UID: \"e7eb783d-4ded-4f2b-bd86-b988e9af5765\") " pod="calico-system/whisker-85d9c4c575-qvf7c" Jan 14 01:01:14.517630 kubelet[3360]: I0114 01:01:14.517544 3360 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7eb783d-4ded-4f2b-bd86-b988e9af5765-whisker-ca-bundle\") pod \"whisker-85d9c4c575-qvf7c\" (UID: 
\"e7eb783d-4ded-4f2b-bd86-b988e9af5765\") " pod="calico-system/whisker-85d9c4c575-qvf7c" Jan 14 01:01:14.727549 containerd[1673]: time="2026-01-14T01:01:14.727483560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85d9c4c575-qvf7c,Uid:e7eb783d-4ded-4f2b-bd86-b988e9af5765,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:14.862143 systemd-networkd[1584]: caliea8db8b334b: Link UP Jan 14 01:01:14.864119 systemd-networkd[1584]: caliea8db8b334b: Gained carrier Jan 14 01:01:14.882380 containerd[1673]: 2026-01-14 01:01:14.747 [INFO][4562] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 01:01:14.882380 containerd[1673]: 2026-01-14 01:01:14.765 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0 whisker-85d9c4c575- calico-system e7eb783d-4ded-4f2b-bd86-b988e9af5765 932 0 2026-01-14 01:01:14 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85d9c4c575 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 whisker-85d9c4c575-qvf7c eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliea8db8b334b [] [] }} ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-" Jan 14 01:01:14.882380 containerd[1673]: 2026-01-14 01:01:14.765 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.882380 containerd[1673]: 2026-01-14 01:01:14.808 [INFO][4576] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" HandleID="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Workload="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.808 [INFO][4576] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" HandleID="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Workload="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"whisker-85d9c4c575-qvf7c", "timestamp":"2026-01-14 01:01:14.808371688 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.808 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.808 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.808 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.818 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.823 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.828 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.829 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882593 containerd[1673]: 2026-01-14 01:01:14.832 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.833 [INFO][4576] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.837 [INFO][4576] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.843 [INFO][4576] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.849 [INFO][4576] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.9.1/26] block=192.168.9.0/26 handle="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.849 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.1/26] handle="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.849 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:01:14.882808 containerd[1673]: 2026-01-14 01:01:14.849 [INFO][4576] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.1/26] IPv6=[] ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" HandleID="k8s-pod-network.d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Workload="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.882932 containerd[1673]: 2026-01-14 01:01:14.852 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0", GenerateName:"whisker-85d9c4c575-", Namespace:"calico-system", SelfLink:"", UID:"e7eb783d-4ded-4f2b-bd86-b988e9af5765", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85d9c4c575", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"whisker-85d9c4c575-qvf7c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliea8db8b334b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:14.882932 containerd[1673]: 2026-01-14 01:01:14.852 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.1/32] ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.883000 containerd[1673]: 2026-01-14 01:01:14.852 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea8db8b334b ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.883000 containerd[1673]: 2026-01-14 01:01:14.864 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.883039 containerd[1673]: 2026-01-14 01:01:14.865 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0", GenerateName:"whisker-85d9c4c575-", Namespace:"calico-system", SelfLink:"", UID:"e7eb783d-4ded-4f2b-bd86-b988e9af5765", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 1, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85d9c4c575", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc", Pod:"whisker-85d9c4c575-qvf7c", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.9.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliea8db8b334b", MAC:"ca:f7:ec:33:64:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:14.883082 containerd[1673]: 2026-01-14 01:01:14.879 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" Namespace="calico-system" 
Pod="whisker-85d9c4c575-qvf7c" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-whisker--85d9c4c575--qvf7c-eth0" Jan 14 01:01:14.920082 containerd[1673]: time="2026-01-14T01:01:14.920003950Z" level=info msg="connecting to shim d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc" address="unix:///run/containerd/s/70b5cb43e54b3723c86ef964f2f10c86f4804356a253344e3464a3925ea14f6a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:14.953940 systemd[1]: Started cri-containerd-d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc.scope - libcontainer container d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc. Jan 14 01:01:14.969000 audit: BPF prog-id=175 op=LOAD Jan 14 01:01:14.972088 kernel: kauditd_printk_skb: 11 callbacks suppressed Jan 14 01:01:14.972133 kernel: audit: type=1334 audit(1768352474.969:580): prog-id=175 op=LOAD Jan 14 01:01:14.972157 kernel: audit: type=1334 audit(1768352474.969:581): prog-id=176 op=LOAD Jan 14 01:01:14.969000 audit: BPF prog-id=176 op=LOAD Jan 14 01:01:14.969000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.979822 kernel: audit: type=1300 audit(1768352474.969:581): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.969000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.983355 kernel: audit: type=1327 audit(1768352474.969:581): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.969000 audit: BPF prog-id=176 op=UNLOAD Jan 14 01:01:14.985190 kernel: audit: type=1334 audit(1768352474.969:582): prog-id=176 op=UNLOAD Jan 14 01:01:14.969000 audit[4707]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.988969 kernel: audit: type=1300 audit(1768352474.969:582): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.989288 kernel: audit: type=1327 audit(1768352474.969:582): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.969000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.970000 audit: BPF prog-id=177 op=LOAD Jan 14 01:01:14.994762 kernel: audit: type=1334 audit(1768352474.970:583): prog-id=177 op=LOAD Jan 14 01:01:14.970000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.998018 kernel: audit: type=1300 audit(1768352474.970:583): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.970000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:15.002329 kernel: audit: type=1327 audit(1768352474.970:583): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.971000 audit: BPF prog-id=178 op=LOAD Jan 14 01:01:14.971000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.973000 audit: BPF prog-id=178 op=UNLOAD Jan 14 01:01:14.973000 audit[4707]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.973000 audit: BPF prog-id=177 op=UNLOAD Jan 14 01:01:14.973000 audit[4707]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:14.973000 audit: BPF prog-id=179 op=LOAD Jan 14 01:01:14.973000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4693 pid=4707 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:14.973000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6438613063643837353533626465373964316661373766613964326437 Jan 14 01:01:15.028113 containerd[1673]: time="2026-01-14T01:01:15.028069841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85d9c4c575-qvf7c,Uid:e7eb783d-4ded-4f2b-bd86-b988e9af5765,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8a0cd87553bde79d1fa77fa9d2d792ff2b00d96bd7f72711579e2195d35dedc\"" Jan 14 01:01:15.030312 containerd[1673]: time="2026-01-14T01:01:15.030284808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:01:15.082000 audit: BPF prog-id=180 op=LOAD Jan 14 01:01:15.082000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0263688 a2=98 a3=ffffd0263678 items=0 ppid=4604 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.082000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:01:15.082000 audit: BPF prog-id=180 op=UNLOAD Jan 14 01:01:15.082000 audit[4765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd0263658 a3=0 items=0 ppid=4604 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.082000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:01:15.082000 audit: BPF prog-id=181 op=LOAD Jan 14 01:01:15.082000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0263538 a2=74 a3=95 items=0 ppid=4604 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.082000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:01:15.082000 audit: BPF prog-id=181 op=UNLOAD Jan 14 01:01:15.082000 audit[4765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4604 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.082000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:01:15.082000 audit: BPF prog-id=182 op=LOAD Jan 14 01:01:15.082000 audit[4765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd0263568 a2=40 a3=ffffd0263598 items=0 ppid=4604 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.082000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:01:15.082000 audit: BPF prog-id=182 op=UNLOAD Jan 14 01:01:15.082000 audit[4765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd0263598 items=0 ppid=4604 pid=4765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.082000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 01:01:15.084000 audit: BPF prog-id=183 op=LOAD Jan 14 01:01:15.084000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff65d7918 a2=98 a3=fffff65d7908 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.084000 audit: BPF prog-id=183 op=UNLOAD Jan 14 01:01:15.084000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff65d78e8 a3=0 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:15.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.084000 audit: BPF prog-id=184 op=LOAD Jan 14 01:01:15.084000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff65d75a8 a2=74 a3=95 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.084000 audit: BPF prog-id=184 op=UNLOAD Jan 14 01:01:15.084000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.084000 audit: BPF prog-id=185 op=LOAD Jan 14 01:01:15.084000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff65d7608 a2=94 a3=2 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.084000 audit: BPF prog-id=185 op=UNLOAD Jan 14 01:01:15.084000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.084000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.184000 audit: BPF prog-id=186 op=LOAD Jan 14 01:01:15.184000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff65d75c8 a2=40 a3=fffff65d75f8 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.184000 audit: BPF prog-id=186 op=UNLOAD Jan 14 01:01:15.184000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff65d75f8 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.184000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=187 op=LOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff65d75d8 a2=94 a3=4 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=187 op=UNLOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 
01:01:15.195000 audit: BPF prog-id=188 op=LOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff65d7418 a2=94 a3=5 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=188 op=UNLOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=189 op=LOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff65d7648 a2=94 a3=6 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=189 op=UNLOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=190 op=LOAD Jan 14 01:01:15.195000 audit[4766]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff65d6e18 a2=94 a3=83 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.195000 audit: BPF prog-id=191 op=LOAD Jan 14 01:01:15.195000 audit[4766]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff65d6bd8 a2=94 a3=2 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.195000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.196000 audit: BPF prog-id=191 op=UNLOAD Jan 14 01:01:15.196000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.196000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.196000 audit: BPF prog-id=190 op=UNLOAD Jan 14 01:01:15.196000 audit[4766]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=f31e620 a3=f311b00 items=0 ppid=4604 pid=4766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.196000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 01:01:15.205000 audit: BPF prog-id=192 op=LOAD Jan 14 01:01:15.205000 audit[4769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc72bdb8 
a2=98 a3=ffffdc72bda8 items=0 ppid=4604 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.205000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:01:15.205000 audit: BPF prog-id=192 op=UNLOAD Jan 14 01:01:15.205000 audit[4769]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdc72bd88 a3=0 items=0 ppid=4604 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.205000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:01:15.205000 audit: BPF prog-id=193 op=LOAD Jan 14 01:01:15.205000 audit[4769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc72bc68 a2=74 a3=95 items=0 ppid=4604 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.205000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:01:15.205000 audit: BPF prog-id=193 op=UNLOAD Jan 14 
01:01:15.205000 audit[4769]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4604 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.205000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:01:15.205000 audit: BPF prog-id=194 op=LOAD Jan 14 01:01:15.205000 audit[4769]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdc72bc98 a2=40 a3=ffffdc72bcc8 items=0 ppid=4604 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.205000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:01:15.206000 audit: BPF prog-id=194 op=UNLOAD Jan 14 01:01:15.206000 audit[4769]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdc72bcc8 items=0 ppid=4604 pid=4769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.206000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 01:01:15.262450 systemd-networkd[1584]: vxlan.calico: Link UP Jan 14 01:01:15.262459 systemd-networkd[1584]: vxlan.calico: Gained carrier Jan 14 01:01:15.269000 audit: BPF prog-id=195 op=LOAD Jan 14 01:01:15.269000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec3b2428 a2=98 a3=ffffec3b2418 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.269000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.269000 audit: BPF prog-id=195 op=UNLOAD Jan 14 01:01:15.269000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffec3b23f8 a3=0 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.269000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=196 op=LOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec3b2108 a2=74 a3=95 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=196 op=UNLOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=197 op=LOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec3b2168 a2=94 a3=2 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=197 op=UNLOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=198 op=LOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec3b1fe8 a2=40 a3=ffffec3b2018 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=198 op=UNLOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffec3b2018 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=199 op=LOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec3b2138 a2=94 a3=b7 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.270000 audit: BPF prog-id=199 op=UNLOAD Jan 14 01:01:15.270000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.270000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.273000 audit: BPF prog-id=200 op=LOAD Jan 14 01:01:15.273000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec3b17e8 a2=94 a3=2 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.273000 audit: BPF prog-id=200 op=UNLOAD Jan 14 01:01:15.273000 audit[4795]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.273000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.273000 audit: BPF prog-id=201 op=LOAD Jan 14 01:01:15.273000 audit[4795]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec3b1978 a2=94 a3=30 items=0 ppid=4604 pid=4795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.273000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 01:01:15.276000 audit: BPF prog-id=202 op=LOAD Jan 14 01:01:15.276000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffffa1b988 a2=98 a3=ffffffa1b978 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.276000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.276000 audit: BPF prog-id=202 op=UNLOAD Jan 14 01:01:15.276000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffffa1b958 a3=0 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.276000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.276000 audit: BPF prog-id=203 op=LOAD Jan 14 01:01:15.276000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffffa1b618 a2=74 a3=95 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.276000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.276000 audit: BPF prog-id=203 op=UNLOAD Jan 14 01:01:15.276000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.276000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.276000 audit: BPF prog-id=204 op=LOAD Jan 14 01:01:15.276000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffffa1b678 a2=94 a3=2 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.276000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.276000 audit: BPF prog-id=204 op=UNLOAD Jan 14 01:01:15.276000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.276000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.377035 containerd[1673]: time="2026-01-14T01:01:15.376994030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:15.378000 audit: BPF prog-id=205 op=LOAD Jan 14 01:01:15.378000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffffa1b638 a2=40 a3=ffffffa1b668 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.378000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.378000 audit: BPF prog-id=205 op=UNLOAD Jan 14 01:01:15.378000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffffa1b668 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.378000 audit: 
PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.380991 containerd[1673]: time="2026-01-14T01:01:15.379342437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:01:15.380991 containerd[1673]: time="2026-01-14T01:01:15.379421237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:15.381602 kubelet[3360]: E0114 01:01:15.381152 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:01:15.381602 kubelet[3360]: E0114 01:01:15.381224 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:01:15.381920 kubelet[3360]: E0114 01:01:15.381352 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2290a9e2f64b4bbd8cad9f1beca215c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:15.384464 containerd[1673]: time="2026-01-14T01:01:15.384141172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:01:15.388000 audit: BPF prog-id=206 op=LOAD Jan 14 
01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffffa1b648 a2=94 a3=4 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.388000 audit: BPF prog-id=206 op=UNLOAD Jan 14 01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.388000 audit: BPF prog-id=207 op=LOAD Jan 14 01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffffa1b488 a2=94 a3=5 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.388000 audit: BPF prog-id=207 op=UNLOAD Jan 14 01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.388000 audit: BPF prog-id=208 op=LOAD Jan 14 01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffffa1b6b8 a2=94 a3=6 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.388000 audit: BPF prog-id=208 op=UNLOAD Jan 14 01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.388000 audit: BPF prog-id=209 op=LOAD Jan 14 01:01:15.388000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffffa1ae88 a2=94 a3=83 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:01:15.388000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.389000 audit: BPF prog-id=210 op=LOAD Jan 14 01:01:15.389000 audit[4800]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffffa1ac48 a2=94 a3=2 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.389000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.389000 audit: BPF prog-id=210 op=UNLOAD Jan 14 01:01:15.389000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.389000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.389000 audit: BPF prog-id=209 op=UNLOAD Jan 14 01:01:15.389000 audit[4800]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3af73620 a3=3af66b00 items=0 ppid=4604 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.389000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 01:01:15.397000 audit: BPF prog-id=201 op=UNLOAD Jan 14 01:01:15.397000 audit[4604]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000a12540 a2=0 a3=0 items=0 ppid=4589 pid=4604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.397000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 01:01:15.450000 audit[4850]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4850 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:15.450000 audit[4850]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffdf242640 a2=0 a3=ffff80b70fa8 items=0 ppid=4604 pid=4850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.450000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:15.453000 audit[4853]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4853 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:15.453000 audit[4853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe430cca0 a2=0 a3=ffffbd1aefa8 items=0 ppid=4604 pid=4853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.453000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:15.465000 audit[4849]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4849 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:15.465000 audit[4849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe2ccea70 a2=0 a3=ffffb9b34fa8 items=0 ppid=4604 pid=4849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.465000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:15.466000 audit[4851]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4851 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:15.466000 audit[4851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe23bf050 a2=0 a3=ffffa03a5fa8 items=0 ppid=4604 pid=4851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:15.466000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:15.720409 containerd[1673]: time="2026-01-14T01:01:15.720189281Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:15.721673 containerd[1673]: time="2026-01-14T01:01:15.721635406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:01:15.721749 containerd[1673]: time="2026-01-14T01:01:15.721643486Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:15.721982 kubelet[3360]: E0114 01:01:15.721943 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:01:15.722042 kubelet[3360]: E0114 01:01:15.721992 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:01:15.722389 kubelet[3360]: E0114 01:01:15.722136 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:15.723728 kubelet[3360]: E0114 01:01:15.723668 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:01:16.218284 containerd[1673]: time="2026-01-14T01:01:16.218227687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqlb,Uid:156d3f01-7b19-463c-9dd8-133de12239c1,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:16.219789 kubelet[3360]: I0114 01:01:16.219757 3360 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2a7902-d245-486d-bc45-29c607d6f794" path="/var/lib/kubelet/pods/aa2a7902-d245-486d-bc45-29c607d6f794/volumes" Jan 14 01:01:16.323115 systemd-networkd[1584]: cali8f5dcbb71f3: Link UP Jan 14 01:01:16.323456 systemd-networkd[1584]: cali8f5dcbb71f3: Gained carrier Jan 14 01:01:16.339499 containerd[1673]: 2026-01-14 01:01:16.257 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0 csi-node-driver- calico-system 156d3f01-7b19-463c-9dd8-133de12239c1 740 0 2026-01-14 01:00:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 csi-node-driver-7cqlb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f5dcbb71f3 [] [] }} ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-" Jan 14 01:01:16.339499 containerd[1673]: 2026-01-14 01:01:16.257 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.339499 containerd[1673]: 2026-01-14 01:01:16.278 [INFO][4878] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" HandleID="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Workload="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.278 [INFO][4878] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" HandleID="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Workload="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e5590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"csi-node-driver-7cqlb", "timestamp":"2026-01-14 01:01:16.278770553 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.278 [INFO][4878] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.279 [INFO][4878] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.279 [INFO][4878] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.289 [INFO][4878] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.294 [INFO][4878] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.298 [INFO][4878] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.300 [INFO][4878] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339721 containerd[1673]: 2026-01-14 01:01:16.302 [INFO][4878] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.302 [INFO][4878] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.304 [INFO][4878] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53 Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.310 [INFO][4878] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.318 [INFO][4878] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.2/26] block=192.168.9.0/26 handle="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.318 [INFO][4878] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.2/26] handle="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.319 [INFO][4878] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:01:16.339913 containerd[1673]: 2026-01-14 01:01:16.319 [INFO][4878] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.2/26] IPv6=[] ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" HandleID="k8s-pod-network.663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Workload="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.340032 containerd[1673]: 2026-01-14 01:01:16.320 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"156d3f01-7b19-463c-9dd8-133de12239c1", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"csi-node-driver-7cqlb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5dcbb71f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:16.340079 containerd[1673]: 2026-01-14 01:01:16.320 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.2/32] ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.340079 containerd[1673]: 2026-01-14 01:01:16.320 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5dcbb71f3 ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.340079 containerd[1673]: 2026-01-14 01:01:16.323 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.340133 containerd[1673]: 2026-01-14 01:01:16.324 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"156d3f01-7b19-463c-9dd8-133de12239c1", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53", Pod:"csi-node-driver-7cqlb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.9.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5dcbb71f3", MAC:"f2:37:cf:27:0b:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:16.340177 containerd[1673]: 2026-01-14 01:01:16.336 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" Namespace="calico-system" Pod="csi-node-driver-7cqlb" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-csi--node--driver--7cqlb-eth0" Jan 14 01:01:16.350000 audit[4894]: NETFILTER_CFG table=filter:125 family=2 entries=36 op=nft_register_chain pid=4894 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:16.350000 audit[4894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffe9f0b1f0 
a2=0 a3=ffffb4dc2fa8 items=0 ppid=4604 pid=4894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.350000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:16.356534 kubelet[3360]: E0114 01:01:16.354478 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:01:16.362939 containerd[1673]: time="2026-01-14T01:01:16.362891491Z" level=info msg="connecting to shim 663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53" address="unix:///run/containerd/s/ea275c207cc4fa55d7d250980420c54d30928da8d52b95b6756be2daf35e7937" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:16.383000 audit[4929]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=4929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:16.383000 audit[4929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffc38fac0 a2=0 
a3=1 items=0 ppid=3494 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.383000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:16.393000 audit[4929]: NETFILTER_CFG table=nat:127 family=2 entries=14 op=nft_register_rule pid=4929 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:16.393000 audit[4929]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffc38fac0 a2=0 a3=1 items=0 ppid=3494 pid=4929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:16.394878 systemd[1]: Started cri-containerd-663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53.scope - libcontainer container 663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53. 
Jan 14 01:01:16.403000 audit: BPF prog-id=211 op=LOAD Jan 14 01:01:16.403000 audit: BPF prog-id=212 op=LOAD Jan 14 01:01:16.403000 audit[4915]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.403000 audit: BPF prog-id=212 op=UNLOAD Jan 14 01:01:16.403000 audit[4915]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.404000 audit: BPF prog-id=213 op=LOAD Jan 14 01:01:16.404000 audit[4915]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.404000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.404000 audit: BPF prog-id=214 op=LOAD Jan 14 01:01:16.404000 audit[4915]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.404000 audit: BPF prog-id=214 op=UNLOAD Jan 14 01:01:16.404000 audit[4915]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.404000 audit: BPF prog-id=213 op=UNLOAD Jan 14 01:01:16.404000 audit[4915]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:16.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.404000 audit: BPF prog-id=215 op=LOAD Jan 14 01:01:16.404000 audit[4915]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4902 pid=4915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:16.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3636336239653462633437626262313232393935363361386339383262 Jan 14 01:01:16.420664 containerd[1673]: time="2026-01-14T01:01:16.420612347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7cqlb,Uid:156d3f01-7b19-463c-9dd8-133de12239c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"663b9e4bc47bbb12299563a8c982b3d9db69f4f36b78a4e9ad5a3d54566faa53\"" Jan 14 01:01:16.422763 containerd[1673]: time="2026-01-14T01:01:16.422731514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:01:16.449905 systemd-networkd[1584]: caliea8db8b334b: Gained IPv6LL Jan 14 01:01:16.514840 systemd-networkd[1584]: vxlan.calico: Gained IPv6LL Jan 14 01:01:16.754817 containerd[1673]: time="2026-01-14T01:01:16.754743491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:16.756350 containerd[1673]: time="2026-01-14T01:01:16.756306336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:01:16.756504 containerd[1673]: time="2026-01-14T01:01:16.756408616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:16.756723 kubelet[3360]: E0114 01:01:16.756661 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:01:16.756985 kubelet[3360]: E0114 01:01:16.756741 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:01:16.756985 kubelet[3360]: E0114 01:01:16.756872 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:01:16.759190 containerd[1673]: time="2026-01-14T01:01:16.759140625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:01:17.090313 containerd[1673]: time="2026-01-14T01:01:17.090259399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:17.091836 containerd[1673]: time="2026-01-14T01:01:17.091797644Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:01:17.091946 containerd[1673]: time="2026-01-14T01:01:17.091887084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:17.092065 kubelet[3360]: E0114 01:01:17.092024 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:01:17.092128 kubelet[3360]: E0114 01:01:17.092069 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:01:17.092253 kubelet[3360]: E0114 01:01:17.092181 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:17.093593 kubelet[3360]: E0114 01:01:17.093554 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:17.356109 kubelet[3360]: E0114 01:01:17.355909 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:17.601902 systemd-networkd[1584]: cali8f5dcbb71f3: Gained IPv6LL Jan 14 01:01:18.219154 containerd[1673]: time="2026-01-14T01:01:18.219106938Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-lvk8t,Uid:b8e2c676-74d2-4fbb-b79b-4d5fa6599826,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:01:18.220472 containerd[1673]: time="2026-01-14T01:01:18.219180018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5456h,Uid:3fa9e714-815e-4dce-822e-1a57156e24fa,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:18.220472 containerd[1673]: time="2026-01-14T01:01:18.219116378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn9rq,Uid:87bb732d-d584-4746-964a-074ac60813c7,Namespace:kube-system,Attempt:0,}" Jan 14 01:01:18.359578 kubelet[3360]: E0114 01:01:18.359508 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:18.368661 systemd-networkd[1584]: cali601ebfe7567: Link UP Jan 14 01:01:18.369820 systemd-networkd[1584]: cali601ebfe7567: Gained carrier Jan 14 01:01:18.386249 containerd[1673]: 2026-01-14 01:01:18.285 [INFO][4947] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0 
calico-apiserver-9bc96f9cf- calico-apiserver b8e2c676-74d2-4fbb-b79b-4d5fa6599826 856 0 2026-01-14 01:00:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bc96f9cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 calico-apiserver-9bc96f9cf-lvk8t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali601ebfe7567 [] [] }} ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-" Jan 14 01:01:18.386249 containerd[1673]: 2026-01-14 01:01:18.285 [INFO][4947] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.386249 containerd[1673]: 2026-01-14 01:01:18.314 [INFO][4994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" HandleID="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.314 [INFO][4994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" HandleID="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x4000137480), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"calico-apiserver-9bc96f9cf-lvk8t", "timestamp":"2026-01-14 01:01:18.31429663 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.314 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.314 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.314 [INFO][4994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.328 [INFO][4994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.333 [INFO][4994] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.339 [INFO][4994] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.341 [INFO][4994] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386477 containerd[1673]: 2026-01-14 01:01:18.344 [INFO][4994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.345 [INFO][4994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block 
block=192.168.9.0/26 handle="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.347 [INFO][4994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701 Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.353 [INFO][4994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.362 [INFO][4994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.3/26] block=192.168.9.0/26 handle="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.362 [INFO][4994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.3/26] handle="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.362 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 01:01:18.386647 containerd[1673]: 2026-01-14 01:01:18.362 [INFO][4994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.3/26] IPv6=[] ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" HandleID="k8s-pod-network.15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.386799 containerd[1673]: 2026-01-14 01:01:18.367 [INFO][4947] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0", GenerateName:"calico-apiserver-9bc96f9cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8e2c676-74d2-4fbb-b79b-4d5fa6599826", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bc96f9cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"calico-apiserver-9bc96f9cf-lvk8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.9.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali601ebfe7567", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:18.386847 containerd[1673]: 2026-01-14 01:01:18.367 [INFO][4947] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.3/32] ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.386847 containerd[1673]: 2026-01-14 01:01:18.367 [INFO][4947] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali601ebfe7567 ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.386847 containerd[1673]: 2026-01-14 01:01:18.369 [INFO][4947] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.386905 containerd[1673]: 2026-01-14 01:01:18.369 [INFO][4947] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0", GenerateName:"calico-apiserver-9bc96f9cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"b8e2c676-74d2-4fbb-b79b-4d5fa6599826", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bc96f9cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701", Pod:"calico-apiserver-9bc96f9cf-lvk8t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali601ebfe7567", MAC:"6e:6c:e0:8e:e0:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:18.386952 containerd[1673]: 2026-01-14 01:01:18.382 [INFO][4947] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-lvk8t" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--lvk8t-eth0" Jan 14 01:01:18.403000 audit[5027]: NETFILTER_CFG table=filter:128 family=2 entries=54 
op=nft_register_chain pid=5027 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:18.403000 audit[5027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=fffff8ffdfe0 a2=0 a3=ffffb5c30fa8 items=0 ppid=4604 pid=5027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.403000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:18.413294 containerd[1673]: time="2026-01-14T01:01:18.413193253Z" level=info msg="connecting to shim 15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701" address="unix:///run/containerd/s/ac2e2ab2a660094c15a904c43be956b47424a7a0273fbe82fcc0fa135d3187e3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:18.435084 systemd[1]: Started cri-containerd-15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701.scope - libcontainer container 15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701. 
Jan 14 01:01:18.449000 audit: BPF prog-id=216 op=LOAD Jan 14 01:01:18.450000 audit: BPF prog-id=217 op=LOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.450000 audit: BPF prog-id=217 op=UNLOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.450000 audit: BPF prog-id=218 op=LOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.450000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.450000 audit: BPF prog-id=219 op=LOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.450000 audit: BPF prog-id=219 op=UNLOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.450000 audit: BPF prog-id=218 op=UNLOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:18.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.450000 audit: BPF prog-id=220 op=LOAD Jan 14 01:01:18.450000 audit[5048]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5037 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.450000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135653465343136353730653265623863656663333662353363646634 Jan 14 01:01:18.473606 systemd-networkd[1584]: cali62105c2d388: Link UP Jan 14 01:01:18.474800 systemd-networkd[1584]: cali62105c2d388: Gained carrier Jan 14 01:01:18.492342 containerd[1673]: 2026-01-14 01:01:18.289 [INFO][4953] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0 goldmane-666569f655- calico-system 3fa9e714-815e-4dce-822e-1a57156e24fa 859 0 2026-01-14 01:00:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 goldmane-666569f655-5456h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali62105c2d388 [] [] }} ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" 
WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-" Jan 14 01:01:18.492342 containerd[1673]: 2026-01-14 01:01:18.291 [INFO][4953] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.492342 containerd[1673]: 2026-01-14 01:01:18.321 [INFO][5000] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" HandleID="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Workload="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.321 [INFO][5000] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" HandleID="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Workload="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400058d170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"goldmane-666569f655-5456h", "timestamp":"2026-01-14 01:01:18.321180811 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.321 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.362 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.363 [INFO][5000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.428 [INFO][5000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.433 [INFO][5000] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.438 [INFO][5000] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.441 [INFO][5000] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492541 containerd[1673]: 2026-01-14 01:01:18.443 [INFO][5000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.443 [INFO][5000] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.446 [INFO][5000] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6 Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.450 [INFO][5000] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.457 [INFO][5000] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.9.4/26] block=192.168.9.0/26 handle="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.457 [INFO][5000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.4/26] handle="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.457 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:01:18.492782 containerd[1673]: 2026-01-14 01:01:18.457 [INFO][5000] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.4/26] IPv6=[] ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" HandleID="k8s-pod-network.0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Workload="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.492904 containerd[1673]: 2026-01-14 01:01:18.465 [INFO][4953] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3fa9e714-815e-4dce-822e-1a57156e24fa", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"goldmane-666569f655-5456h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62105c2d388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:18.492956 containerd[1673]: 2026-01-14 01:01:18.465 [INFO][4953] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.4/32] ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.492956 containerd[1673]: 2026-01-14 01:01:18.465 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali62105c2d388 ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.492956 containerd[1673]: 2026-01-14 01:01:18.476 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.493009 containerd[1673]: 2026-01-14 01:01:18.477 [INFO][4953] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"3fa9e714-815e-4dce-822e-1a57156e24fa", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6", Pod:"goldmane-666569f655-5456h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.9.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali62105c2d388", MAC:"56:5a:53:b0:92:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:18.493054 containerd[1673]: 2026-01-14 01:01:18.488 [INFO][4953] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" 
Namespace="calico-system" Pod="goldmane-666569f655-5456h" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-goldmane--666569f655--5456h-eth0" Jan 14 01:01:18.493498 containerd[1673]: time="2026-01-14T01:01:18.492813297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-lvk8t,Uid:b8e2c676-74d2-4fbb-b79b-4d5fa6599826,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"15e4e416570e2eb8cefc36b53cdf4280bb2829c6d766951642f3e81728c98701\"" Jan 14 01:01:18.495365 containerd[1673]: time="2026-01-14T01:01:18.495330464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:01:18.502000 audit[5084]: NETFILTER_CFG table=filter:129 family=2 entries=58 op=nft_register_chain pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:18.502000 audit[5084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30408 a0=3 a1=ffffe1877f50 a2=0 a3=ffff9226ffa8 items=0 ppid=4604 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.502000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:18.519850 containerd[1673]: time="2026-01-14T01:01:18.519811099Z" level=info msg="connecting to shim 0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6" address="unix:///run/containerd/s/6f89063c054f239ce2222bc9fe72d542d335ae3a0c7da785689bf6105f9b2ac7" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:18.550896 systemd[1]: Started cri-containerd-0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6.scope - libcontainer container 0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6. 
Jan 14 01:01:18.562963 systemd-networkd[1584]: cali89b047d9f76: Link UP Jan 14 01:01:18.563380 systemd-networkd[1584]: cali89b047d9f76: Gained carrier Jan 14 01:01:18.563000 audit: BPF prog-id=221 op=LOAD Jan 14 01:01:18.564000 audit: BPF prog-id=222 op=LOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.564000 audit: BPF prog-id=222 op=UNLOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.564000 audit: BPF prog-id=223 op=LOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.564000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.564000 audit: BPF prog-id=224 op=LOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.564000 audit: BPF prog-id=224 op=UNLOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.564000 audit: BPF prog-id=223 op=UNLOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:18.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.564000 audit: BPF prog-id=225 op=LOAD Jan 14 01:01:18.564000 audit[5104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=5093 pid=5104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.564000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064656537383734633739313537346663386634386464323266353536 Jan 14 01:01:18.580432 containerd[1673]: 2026-01-14 01:01:18.295 [INFO][4957] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0 coredns-674b8bbfcf- kube-system 87bb732d-d584-4746-964a-074ac60813c7 852 0 2026-01-14 01:00:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 coredns-674b8bbfcf-cn9rq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali89b047d9f76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-" Jan 14 01:01:18.580432 containerd[1673]: 2026-01-14 
01:01:18.296 [INFO][4957] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.580432 containerd[1673]: 2026-01-14 01:01:18.326 [INFO][5007] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" HandleID="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Workload="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.326 [INFO][5007] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" HandleID="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Workload="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353720), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"coredns-674b8bbfcf-cn9rq", "timestamp":"2026-01-14 01:01:18.325997026 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.326 [INFO][5007] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.457 [INFO][5007] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.457 [INFO][5007] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.528 [INFO][5007] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.534 [INFO][5007] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.539 [INFO][5007] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.542 [INFO][5007] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.580603 containerd[1673]: 2026-01-14 01:01:18.544 [INFO][5007] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.545 [INFO][5007] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.546 [INFO][5007] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.550 [INFO][5007] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.557 [INFO][5007] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.9.5/26] block=192.168.9.0/26 handle="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.558 [INFO][5007] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.5/26] handle="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.558 [INFO][5007] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:01:18.581244 containerd[1673]: 2026-01-14 01:01:18.558 [INFO][5007] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.5/26] IPv6=[] ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" HandleID="k8s-pod-network.94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Workload="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.581374 containerd[1673]: 2026-01-14 01:01:18.560 [INFO][4957] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"87bb732d-d584-4746-964a-074ac60813c7", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"coredns-674b8bbfcf-cn9rq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89b047d9f76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:18.581374 containerd[1673]: 2026-01-14 01:01:18.560 [INFO][4957] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.5/32] ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.581374 containerd[1673]: 2026-01-14 01:01:18.560 [INFO][4957] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89b047d9f76 ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.581374 containerd[1673]: 2026-01-14 01:01:18.564 [INFO][4957] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.581374 containerd[1673]: 2026-01-14 01:01:18.564 [INFO][4957] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"87bb732d-d584-4746-964a-074ac60813c7", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d", Pod:"coredns-674b8bbfcf-cn9rq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali89b047d9f76", MAC:"ae:4d:6d:f2:64:b3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:18.581374 containerd[1673]: 2026-01-14 01:01:18.576 [INFO][4957] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" Namespace="kube-system" Pod="coredns-674b8bbfcf-cn9rq" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--cn9rq-eth0" Jan 14 01:01:18.595851 containerd[1673]: time="2026-01-14T01:01:18.595819892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5456h,Uid:3fa9e714-815e-4dce-822e-1a57156e24fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dee7874c791574fc8f48dd22f5568f937618d0b1a9292197b71ecb8e1d073a6\"" Jan 14 01:01:18.599000 audit[5139]: NETFILTER_CFG table=filter:130 family=2 entries=56 op=nft_register_chain pid=5139 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:18.599000 audit[5139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27764 a0=3 a1=ffffdf7dc990 a2=0 a3=ffffa45f6fa8 items=0 ppid=4604 pid=5139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.599000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:18.611641 containerd[1673]: 
time="2026-01-14T01:01:18.611598901Z" level=info msg="connecting to shim 94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d" address="unix:///run/containerd/s/e804f86e1dc85569a6ead155ce6afcbf03d2054d3de94b499305e6a33ff1325c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:18.634069 systemd[1]: Started cri-containerd-94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d.scope - libcontainer container 94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d. Jan 14 01:01:18.642000 audit: BPF prog-id=226 op=LOAD Jan 14 01:01:18.643000 audit: BPF prog-id=227 op=LOAD Jan 14 01:01:18.643000 audit[5160]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.643000 audit: BPF prog-id=227 op=UNLOAD Jan 14 01:01:18.643000 audit[5160]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.643000 audit: BPF prog-id=228 op=LOAD Jan 14 01:01:18.643000 
audit[5160]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.643000 audit: BPF prog-id=229 op=LOAD Jan 14 01:01:18.643000 audit[5160]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.643000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.644000 audit: BPF prog-id=229 op=UNLOAD Jan 14 01:01:18.644000 audit[5160]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.644000 audit: BPF 
prog-id=228 op=UNLOAD Jan 14 01:01:18.644000 audit[5160]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.644000 audit: BPF prog-id=230 op=LOAD Jan 14 01:01:18.644000 audit[5160]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5149 pid=5160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.644000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3934613636333863653634613235393138613933346438373834633235 Jan 14 01:01:18.666662 containerd[1673]: time="2026-01-14T01:01:18.666621189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cn9rq,Uid:87bb732d-d584-4746-964a-074ac60813c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d\"" Jan 14 01:01:18.672478 containerd[1673]: time="2026-01-14T01:01:18.672441367Z" level=info msg="CreateContainer within sandbox \"94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:01:18.680895 containerd[1673]: time="2026-01-14T01:01:18.680848113Z" level=info 
msg="Container 5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:01:18.686772 containerd[1673]: time="2026-01-14T01:01:18.686741891Z" level=info msg="CreateContainer within sandbox \"94a6638ce64a25918a934d8784c2558d3416d2eaf983e0f3e783e52e05d3ca4d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e\"" Jan 14 01:01:18.687180 containerd[1673]: time="2026-01-14T01:01:18.687143692Z" level=info msg="StartContainer for \"5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e\"" Jan 14 01:01:18.687977 containerd[1673]: time="2026-01-14T01:01:18.687952215Z" level=info msg="connecting to shim 5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e" address="unix:///run/containerd/s/e804f86e1dc85569a6ead155ce6afcbf03d2054d3de94b499305e6a33ff1325c" protocol=ttrpc version=3 Jan 14 01:01:18.709121 systemd[1]: Started cri-containerd-5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e.scope - libcontainer container 5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e. 
Jan 14 01:01:18.718000 audit: BPF prog-id=231 op=LOAD Jan 14 01:01:18.718000 audit: BPF prog-id=232 op=LOAD Jan 14 01:01:18.718000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.718000 audit: BPF prog-id=232 op=UNLOAD Jan 14 01:01:18.718000 audit[5187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.718000 audit: BPF prog-id=233 op=LOAD Jan 14 01:01:18.718000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.718000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.718000 audit: BPF prog-id=234 op=LOAD Jan 14 01:01:18.718000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.718000 audit: BPF prog-id=234 op=UNLOAD Jan 14 01:01:18.718000 audit[5187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.718000 audit: BPF prog-id=233 op=UNLOAD Jan 14 01:01:18.718000 audit[5187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:18.718000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.719000 audit: BPF prog-id=235 op=LOAD Jan 14 01:01:18.719000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5149 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:18.719000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566303537646566663339663936323938626338613834303963306233 Jan 14 01:01:18.735435 containerd[1673]: time="2026-01-14T01:01:18.735302480Z" level=info msg="StartContainer for \"5f057deff39f96298bc8a8409c0b3a6f6dc3442765a7de77eed18f2be2f27e6e\" returns successfully" Jan 14 01:01:18.826246 containerd[1673]: time="2026-01-14T01:01:18.825949357Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:18.827736 containerd[1673]: time="2026-01-14T01:01:18.827682083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:01:18.827895 containerd[1673]: time="2026-01-14T01:01:18.827860883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:18.828110 kubelet[3360]: E0114 01:01:18.828055 3360 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:18.828185 kubelet[3360]: E0114 01:01:18.828123 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:18.828479 kubelet[3360]: E0114 01:01:18.828423 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:18.828902 containerd[1673]: time="2026-01-14T01:01:18.828803246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:01:18.830153 kubelet[3360]: E0114 01:01:18.830123 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:01:19.178969 containerd[1673]: time="2026-01-14T01:01:19.178909999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:01:19.180802 containerd[1673]: time="2026-01-14T01:01:19.180761604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:01:19.180958 containerd[1673]: time="2026-01-14T01:01:19.180847845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:19.181048 kubelet[3360]: E0114 01:01:19.181003 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:01:19.181111 kubelet[3360]: E0114 01:01:19.181056 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:01:19.181262 kubelet[3360]: E0114 01:01:19.181216 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98jsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5456h_calico-system(3fa9e714-815e-4dce-822e-1a57156e24fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:19.183318 kubelet[3360]: E0114 01:01:19.183283 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:01:19.218845 containerd[1673]: time="2026-01-14T01:01:19.218801281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4cd96c-cs27s,Uid:9d6eeb8e-19e2-43b0-aa84-76d3b0583005,Namespace:calico-system,Attempt:0,}" Jan 14 01:01:19.316319 systemd-networkd[1584]: cali87507382b0d: Link UP Jan 14 01:01:19.316476 systemd-networkd[1584]: 
cali87507382b0d: Gained carrier Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.255 [INFO][5223] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0 calico-kube-controllers-645b4cd96c- calico-system 9d6eeb8e-19e2-43b0-aa84-76d3b0583005 854 0 2026-01-14 01:00:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:645b4cd96c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 calico-kube-controllers-645b4cd96c-cs27s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali87507382b0d [] [] }} ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.255 [INFO][5223] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.277 [INFO][5237] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" HandleID="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.277 
[INFO][5237] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" HandleID="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000510b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"calico-kube-controllers-645b4cd96c-cs27s", "timestamp":"2026-01-14 01:01:19.277430501 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.277 [INFO][5237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.277 [INFO][5237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.277 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.288 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.292 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.295 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.297 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.299 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.299 [INFO][5237] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.301 [INFO][5237] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.305 [INFO][5237] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.312 [INFO][5237] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.9.6/26] block=192.168.9.0/26 handle="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.312 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.6/26] handle="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.312 [INFO][5237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:01:19.328970 containerd[1673]: 2026-01-14 01:01:19.312 [INFO][5237] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.6/26] IPv6=[] ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" HandleID="k8s-pod-network.03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.330267 containerd[1673]: 2026-01-14 01:01:19.314 [INFO][5223] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0", GenerateName:"calico-kube-controllers-645b4cd96c-", Namespace:"calico-system", SelfLink:"", UID:"9d6eeb8e-19e2-43b0-aa84-76d3b0583005", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"645b4cd96c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"calico-kube-controllers-645b4cd96c-cs27s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali87507382b0d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:19.330267 containerd[1673]: 2026-01-14 01:01:19.314 [INFO][5223] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.6/32] ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.330267 containerd[1673]: 2026-01-14 01:01:19.314 [INFO][5223] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87507382b0d ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.330267 containerd[1673]: 2026-01-14 01:01:19.316 [INFO][5223] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" 
Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.330267 containerd[1673]: 2026-01-14 01:01:19.317 [INFO][5223] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0", GenerateName:"calico-kube-controllers-645b4cd96c-", Namespace:"calico-system", SelfLink:"", UID:"9d6eeb8e-19e2-43b0-aa84-76d3b0583005", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"645b4cd96c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e", Pod:"calico-kube-controllers-645b4cd96c-cs27s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.9.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali87507382b0d", MAC:"6e:7f:99:c3:3a:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:19.330267 containerd[1673]: 2026-01-14 01:01:19.326 [INFO][5223] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" Namespace="calico-system" Pod="calico-kube-controllers-645b4cd96c-cs27s" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--kube--controllers--645b4cd96c--cs27s-eth0" Jan 14 01:01:19.342000 audit[5254]: NETFILTER_CFG table=filter:131 family=2 entries=40 op=nft_register_chain pid=5254 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:19.342000 audit[5254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20784 a0=3 a1=fffff9e99800 a2=0 a3=ffff7f4dcfa8 items=0 ppid=4604 pid=5254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.342000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:19.352784 containerd[1673]: time="2026-01-14T01:01:19.352251210Z" level=info msg="connecting to shim 03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e" address="unix:///run/containerd/s/e3a62dfebd5154b7c73f281f4c28d05ad743c6bcc94c92f32643774623d9cbb6" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:19.367058 kubelet[3360]: E0114 01:01:19.367017 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:01:19.368397 kubelet[3360]: E0114 01:01:19.368365 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:01:19.377894 kubelet[3360]: I0114 01:01:19.377572 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cn9rq" podStartSLOduration=43.376665445 podStartE2EDuration="43.376665445s" podCreationTimestamp="2026-01-14 01:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:01:19.37516184 +0000 UTC m=+51.245291655" watchObservedRunningTime="2026-01-14 01:01:19.376665445 +0000 UTC m=+51.246795300" Jan 14 01:01:19.381017 systemd[1]: Started cri-containerd-03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e.scope - libcontainer container 03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e. 
Jan 14 01:01:19.394000 audit[5295]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=5295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:19.394000 audit[5295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffffa4c4740 a2=0 a3=1 items=0 ppid=3494 pid=5295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:19.400000 audit[5295]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=5295 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:19.400000 audit[5295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffffa4c4740 a2=0 a3=1 items=0 ppid=3494 pid=5295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.400000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:19.401000 audit: BPF prog-id=236 op=LOAD Jan 14 01:01:19.403000 audit: BPF prog-id=237 op=LOAD Jan 14 01:01:19.403000 audit[5275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.403000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.403000 audit: BPF prog-id=237 op=UNLOAD Jan 14 01:01:19.403000 audit[5275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.404000 audit: BPF prog-id=238 op=LOAD Jan 14 01:01:19.404000 audit[5275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.407000 audit: BPF prog-id=239 op=LOAD Jan 14 01:01:19.407000 audit[5275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:01:19.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.407000 audit: BPF prog-id=239 op=UNLOAD Jan 14 01:01:19.407000 audit[5275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.407000 audit: BPF prog-id=238 op=UNLOAD Jan 14 01:01:19.407000 audit[5275]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.407000 audit: BPF prog-id=240 op=LOAD Jan 14 01:01:19.407000 audit[5275]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5263 pid=5275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033353630663636396561386437353761653863316466376166313466 Jan 14 01:01:19.425000 audit[5297]: NETFILTER_CFG table=filter:134 family=2 entries=17 op=nft_register_rule pid=5297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:19.425000 audit[5297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc3127210 a2=0 a3=1 items=0 ppid=3494 pid=5297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.425000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:19.429000 audit[5297]: NETFILTER_CFG table=nat:135 family=2 entries=35 op=nft_register_chain pid=5297 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:19.429000 audit[5297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc3127210 a2=0 a3=1 items=0 ppid=3494 pid=5297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:19.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:19.445419 containerd[1673]: time="2026-01-14T01:01:19.445385255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-645b4cd96c-cs27s,Uid:9d6eeb8e-19e2-43b0-aa84-76d3b0583005,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"03560f669ea8d757ae8c1df7af14f07a5da3533647921fb913bffb4915cb087e\"" Jan 14 01:01:19.447402 containerd[1673]: time="2026-01-14T01:01:19.447226181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:01:19.714151 systemd-networkd[1584]: cali62105c2d388: Gained IPv6LL Jan 14 01:01:19.763963 containerd[1673]: time="2026-01-14T01:01:19.763726231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:19.766343 containerd[1673]: time="2026-01-14T01:01:19.766196758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:01:19.766343 containerd[1673]: time="2026-01-14T01:01:19.766215278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:19.766585 kubelet[3360]: E0114 01:01:19.766451 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:01:19.766585 kubelet[3360]: E0114 01:01:19.766501 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:01:19.766835 kubelet[3360]: E0114 01:01:19.766649 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh8rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:19.768280 kubelet[3360]: E0114 01:01:19.767990 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:01:20.034893 systemd-networkd[1584]: cali89b047d9f76: Gained IPv6LL Jan 14 01:01:20.354142 systemd-networkd[1584]: cali601ebfe7567: Gained IPv6LL Jan 14 01:01:20.374716 kubelet[3360]: E0114 01:01:20.373967 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:01:20.374716 kubelet[3360]: E0114 01:01:20.373972 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:01:20.374716 kubelet[3360]: E0114 01:01:20.374045 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:01:20.447000 audit[5305]: NETFILTER_CFG table=filter:136 family=2 entries=14 op=nft_register_rule pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:20.448950 kernel: kauditd_printk_skb: 375 callbacks suppressed Jan 14 01:01:20.449007 kernel: audit: type=1325 audit(1768352480.447:713): table=filter:136 family=2 entries=14 
op=nft_register_rule pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:20.447000 audit[5305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcbc51c60 a2=0 a3=1 items=0 ppid=3494 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:20.454045 kernel: audit: type=1300 audit(1768352480.447:713): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcbc51c60 a2=0 a3=1 items=0 ppid=3494 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:20.447000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:20.455931 kernel: audit: type=1327 audit(1768352480.447:713): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:20.456000 audit[5305]: NETFILTER_CFG table=nat:137 family=2 entries=20 op=nft_register_rule pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:20.456000 audit[5305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcbc51c60 a2=0 a3=1 items=0 ppid=3494 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:20.463583 kernel: audit: type=1325 audit(1768352480.456:714): table=nat:137 family=2 entries=20 op=nft_register_rule pid=5305 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:20.463643 kernel: audit: type=1300 audit(1768352480.456:714): arch=c00000b7 syscall=211 success=yes 
exit=5772 a0=3 a1=ffffcbc51c60 a2=0 a3=1 items=0 ppid=3494 pid=5305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:20.456000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:20.465456 kernel: audit: type=1327 audit(1768352480.456:714): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:20.673887 systemd-networkd[1584]: cali87507382b0d: Gained IPv6LL Jan 14 01:01:21.218279 containerd[1673]: time="2026-01-14T01:01:21.217907726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-6x9dt,Uid:19f24365-565a-4989-ac4f-5964349e35e5,Namespace:calico-apiserver,Attempt:0,}" Jan 14 01:01:21.218740 containerd[1673]: time="2026-01-14T01:01:21.218334488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtnc2,Uid:3c6f06cc-5927-444c-9eab-65c1de1f4fb8,Namespace:kube-system,Attempt:0,}" Jan 14 01:01:21.332293 systemd-networkd[1584]: cali0f678df928b: Link UP Jan 14 01:01:21.332901 systemd-networkd[1584]: cali0f678df928b: Gained carrier Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.262 [INFO][5312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0 calico-apiserver-9bc96f9cf- calico-apiserver 19f24365-565a-4989-ac4f-5964349e35e5 858 0 2026-01-14 01:00:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9bc96f9cf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 
calico-apiserver-9bc96f9cf-6x9dt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f678df928b [] [] }} ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.262 [INFO][5312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5341] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" HandleID="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5341] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" HandleID="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3080), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"calico-apiserver-9bc96f9cf-6x9dt", "timestamp":"2026-01-14 01:01:21.286229176 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5341] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.296 [INFO][5341] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.304 [INFO][5341] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.308 [INFO][5341] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.311 [INFO][5341] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.313 [INFO][5341] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.313 [INFO][5341] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.314 [INFO][5341] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4 Jan 14 01:01:21.350645 containerd[1673]: 
2026-01-14 01:01:21.320 [INFO][5341] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.327 [INFO][5341] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.9.7/26] block=192.168.9.0/26 handle="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.327 [INFO][5341] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.7/26] handle="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.327 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:01:21.350645 containerd[1673]: 2026-01-14 01:01:21.327 [INFO][5341] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.7/26] IPv6=[] ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" HandleID="k8s-pod-network.3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Workload="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.351389 containerd[1673]: 2026-01-14 01:01:21.329 [INFO][5312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0", GenerateName:"calico-apiserver-9bc96f9cf-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"19f24365-565a-4989-ac4f-5964349e35e5", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bc96f9cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"calico-apiserver-9bc96f9cf-6x9dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f678df928b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:21.351389 containerd[1673]: 2026-01-14 01:01:21.329 [INFO][5312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.7/32] ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.351389 containerd[1673]: 2026-01-14 01:01:21.329 [INFO][5312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f678df928b ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" 
WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.351389 containerd[1673]: 2026-01-14 01:01:21.333 [INFO][5312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.351389 containerd[1673]: 2026-01-14 01:01:21.333 [INFO][5312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0", GenerateName:"calico-apiserver-9bc96f9cf-", Namespace:"calico-apiserver", SelfLink:"", UID:"19f24365-565a-4989-ac4f-5964349e35e5", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9bc96f9cf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", 
ContainerID:"3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4", Pod:"calico-apiserver-9bc96f9cf-6x9dt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.9.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f678df928b", MAC:"ee:86:63:9c:09:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:21.351389 containerd[1673]: 2026-01-14 01:01:21.348 [INFO][5312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" Namespace="calico-apiserver" Pod="calico-apiserver-9bc96f9cf-6x9dt" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-calico--apiserver--9bc96f9cf--6x9dt-eth0" Jan 14 01:01:21.360000 audit[5367]: NETFILTER_CFG table=filter:138 family=2 entries=45 op=nft_register_chain pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:21.360000 audit[5367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24232 a0=3 a1=ffffda9913f0 a2=0 a3=ffff81e12fa8 items=0 ppid=4604 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.366713 kernel: audit: type=1325 audit(1768352481.360:715): table=filter:138 family=2 entries=45 op=nft_register_chain pid=5367 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:21.366905 kernel: audit: type=1300 audit(1768352481.360:715): arch=c00000b7 syscall=211 success=yes exit=24232 a0=3 a1=ffffda9913f0 a2=0 a3=ffff81e12fa8 items=0 ppid=4604 pid=5367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.367066 kernel: audit: type=1327 audit(1768352481.360:715): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:21.360000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:21.377745 kubelet[3360]: E0114 01:01:21.377516 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:01:21.386751 containerd[1673]: time="2026-01-14T01:01:21.386201442Z" level=info msg="connecting to shim 3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4" address="unix:///run/containerd/s/8907b02fb2bbc385cc72d07a7c91d5e551c8d7b5d7ec3cc5025ef324457e0459" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:21.415875 systemd[1]: Started cri-containerd-3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4.scope - libcontainer container 3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4. 
Jan 14 01:01:21.427000 audit: BPF prog-id=241 op=LOAD Jan 14 01:01:21.428711 kernel: audit: type=1334 audit(1768352481.427:716): prog-id=241 op=LOAD Jan 14 01:01:21.427000 audit: BPF prog-id=242 op=LOAD Jan 14 01:01:21.427000 audit[5388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.427000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.428000 audit: BPF prog-id=242 op=UNLOAD Jan 14 01:01:21.428000 audit[5388]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.428000 audit: BPF prog-id=243 op=LOAD Jan 14 01:01:21.428000 audit[5388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.428000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.428000 audit: BPF prog-id=244 op=LOAD Jan 14 01:01:21.428000 audit[5388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.428000 audit: BPF prog-id=244 op=UNLOAD Jan 14 01:01:21.428000 audit[5388]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.428000 audit: BPF prog-id=243 op=UNLOAD Jan 14 01:01:21.428000 audit[5388]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:21.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.429000 audit: BPF prog-id=245 op=LOAD Jan 14 01:01:21.429000 audit[5388]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5376 pid=5388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.429000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339353466346462633234383839386133656439353961333361643133 Jan 14 01:01:21.437593 systemd-networkd[1584]: cali3f1bfe582dd: Link UP Jan 14 01:01:21.438362 systemd-networkd[1584]: cali3f1bfe582dd: Gained carrier Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.263 [INFO][5323] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0 coredns-674b8bbfcf- kube-system 3c6f06cc-5927-444c-9eab-65c1de1f4fb8 855 0 2026-01-14 01:00:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-a666ba3d92 coredns-674b8bbfcf-rtnc2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f1bfe582dd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.263 [INFO][5323] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5343] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" HandleID="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Workload="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5343] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" HandleID="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Workload="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c540), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-a666ba3d92", "pod":"coredns-674b8bbfcf-rtnc2", "timestamp":"2026-01-14 01:01:21.286596737 +0000 UTC"}, Hostname:"ci-4547-0-0-n-a666ba3d92", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.286 [INFO][5343] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.327 [INFO][5343] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.328 [INFO][5343] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-a666ba3d92' Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.397 [INFO][5343] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.403 [INFO][5343] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.410 [INFO][5343] ipam/ipam.go 511: Trying affinity for 192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.412 [INFO][5343] ipam/ipam.go 158: Attempting to load block cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.415 [INFO][5343] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.9.0/26 host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.415 [INFO][5343] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.9.0/26 handle="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.419 [INFO][5343] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850 Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.423 [INFO][5343] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.9.0/26 handle="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.430 [INFO][5343] ipam/ipam.go 1262: Successfully claimed 
IPs: [192.168.9.8/26] block=192.168.9.0/26 handle="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.430 [INFO][5343] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.9.8/26] handle="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" host="ci-4547-0-0-n-a666ba3d92" Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.430 [INFO][5343] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 01:01:21.452888 containerd[1673]: 2026-01-14 01:01:21.430 [INFO][5343] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.9.8/26] IPv6=[] ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" HandleID="k8s-pod-network.9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Workload="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.453403 containerd[1673]: 2026-01-14 01:01:21.433 [INFO][5323] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3c6f06cc-5927-444c-9eab-65c1de1f4fb8", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"", Pod:"coredns-674b8bbfcf-rtnc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f1bfe582dd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:21.453403 containerd[1673]: 2026-01-14 01:01:21.433 [INFO][5323] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.9.8/32] ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.453403 containerd[1673]: 2026-01-14 01:01:21.433 [INFO][5323] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f1bfe582dd ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.453403 containerd[1673]: 2026-01-14 01:01:21.438 [INFO][5323] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.453403 containerd[1673]: 2026-01-14 01:01:21.438 [INFO][5323] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3c6f06cc-5927-444c-9eab-65c1de1f4fb8", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 1, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-a666ba3d92", ContainerID:"9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850", Pod:"coredns-674b8bbfcf-rtnc2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.9.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f1bfe582dd", MAC:"ba:99:52:3f:61:82", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 01:01:21.453403 containerd[1673]: 2026-01-14 01:01:21.450 [INFO][5323] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" Namespace="kube-system" Pod="coredns-674b8bbfcf-rtnc2" WorkloadEndpoint="ci--4547--0--0--n--a666ba3d92-k8s-coredns--674b8bbfcf--rtnc2-eth0" Jan 14 01:01:21.466902 containerd[1673]: time="2026-01-14T01:01:21.466844889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9bc96f9cf-6x9dt,Uid:19f24365-565a-4989-ac4f-5964349e35e5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3954f4dbc248898a3ed959a33ad13f497c667e99f3dce82cff82015c1c0cdaf4\"" Jan 14 01:01:21.468991 containerd[1673]: time="2026-01-14T01:01:21.468801015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:01:21.472000 audit[5425]: NETFILTER_CFG table=filter:139 family=2 entries=44 op=nft_register_chain pid=5425 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 01:01:21.472000 audit[5425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21500 a0=3 a1=ffffe9be6480 a2=0 a3=ffff7fe47fa8 items=0 ppid=4604 pid=5425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.472000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 01:01:21.481380 containerd[1673]: time="2026-01-14T01:01:21.481341133Z" level=info msg="connecting to shim 9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850" address="unix:///run/containerd/s/cccc280cd231f8ab063e47844b5c4b74da8eb8704ed1f8385415d7a75fbb1e8a" namespace=k8s.io protocol=ttrpc version=3 Jan 14 01:01:21.508113 systemd[1]: Started cri-containerd-9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850.scope - libcontainer container 9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850. Jan 14 01:01:21.516000 audit: BPF prog-id=246 op=LOAD Jan 14 01:01:21.517000 audit: BPF prog-id=247 op=LOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.517000 audit: BPF prog-id=247 op=UNLOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.517000 audit: BPF prog-id=248 op=LOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.517000 audit: BPF prog-id=249 op=LOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.517000 audit: BPF prog-id=249 op=UNLOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.517000 audit: BPF prog-id=248 op=UNLOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.517000 audit: BPF prog-id=250 op=LOAD Jan 14 01:01:21.517000 audit[5446]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=5435 pid=5446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962343761313636386364643431326337323339623134313534656335 Jan 14 01:01:21.542152 containerd[1673]: time="2026-01-14T01:01:21.542109480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rtnc2,Uid:3c6f06cc-5927-444c-9eab-65c1de1f4fb8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850\"" Jan 14 01:01:21.549472 containerd[1673]: time="2026-01-14T01:01:21.548970941Z" level=info msg="CreateContainer within sandbox \"9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 01:01:21.558197 containerd[1673]: time="2026-01-14T01:01:21.558163289Z" level=info msg="Container 4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:01:21.564462 containerd[1673]: time="2026-01-14T01:01:21.564426548Z" level=info msg="CreateContainer within sandbox \"9b47a1668cdd412c7239b14154ec509a4a1000b7034af7e33c07ab3b0e9a4850\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348\"" Jan 14 01:01:21.565096 containerd[1673]: time="2026-01-14T01:01:21.565070510Z" level=info msg="StartContainer for \"4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348\"" Jan 14 01:01:21.566002 containerd[1673]: time="2026-01-14T01:01:21.565955793Z" level=info msg="connecting to shim 4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348" address="unix:///run/containerd/s/cccc280cd231f8ab063e47844b5c4b74da8eb8704ed1f8385415d7a75fbb1e8a" protocol=ttrpc version=3 Jan 14 01:01:21.590945 systemd[1]: Started cri-containerd-4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348.scope - libcontainer container 4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348. 
Jan 14 01:01:21.600000 audit: BPF prog-id=251 op=LOAD Jan 14 01:01:21.601000 audit: BPF prog-id=252 op=LOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.601000 audit: BPF prog-id=252 op=UNLOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.601000 audit: BPF prog-id=253 op=LOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.601000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.601000 audit: BPF prog-id=254 op=LOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.601000 audit: BPF prog-id=254 op=UNLOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.601000 audit: BPF prog-id=253 op=UNLOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 01:01:21.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.601000 audit: BPF prog-id=255 op=LOAD Jan 14 01:01:21.601000 audit[5473]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=5435 pid=5473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:21.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462396639623032623736353366383033643532636463383630366462 Jan 14 01:01:21.617405 containerd[1673]: time="2026-01-14T01:01:21.617366030Z" level=info msg="StartContainer for \"4b9f9b02b7653f803d52cdc8606db6f8272845ff4b64cbc1d3faf29b03ecf348\" returns successfully" Jan 14 01:01:21.817323 containerd[1673]: time="2026-01-14T01:01:21.817262243Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:21.818797 containerd[1673]: time="2026-01-14T01:01:21.818754807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:01:21.819044 containerd[1673]: time="2026-01-14T01:01:21.818801447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:21.819239 kubelet[3360]: E0114 01:01:21.819205 3360 log.go:32] "PullImage from image service 
failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:21.819278 kubelet[3360]: E0114 01:01:21.819253 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:21.819418 kubelet[3360]: E0114 01:01:21.819376 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdv76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-6x9dt_calico-apiserver(19f24365-565a-4989-ac4f-5964349e35e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:21.820726 kubelet[3360]: E0114 01:01:21.820647 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:01:22.380148 kubelet[3360]: E0114 01:01:22.380047 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:01:22.413131 kubelet[3360]: I0114 01:01:22.412769 3360 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rtnc2" podStartSLOduration=46.412749787 podStartE2EDuration="46.412749787s" podCreationTimestamp="2026-01-14 01:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 01:01:22.391608442 +0000 UTC m=+54.261738257" watchObservedRunningTime="2026-01-14 01:01:22.412749787 +0000 UTC m=+54.282879602" Jan 14 01:01:22.453000 audit[5509]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:22.453000 audit[5509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe4946ca0 a2=0 a3=1 items=0 ppid=3494 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:22.453000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:22.460000 audit[5509]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:22.460000 audit[5509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe4946ca0 a2=0 a3=1 items=0 ppid=3494 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:22.460000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:22.484000 audit[5511]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:22.484000 audit[5511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5d91d50 a2=0 a3=1 items=0 ppid=3494 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:22.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:22.497000 audit[5511]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=5511 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:01:22.497000 audit[5511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff5d91d50 a2=0 a3=1 items=0 ppid=3494 pid=5511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:01:22.497000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:01:22.529878 systemd-networkd[1584]: cali3f1bfe582dd: Gained IPv6LL Jan 14 01:01:22.978191 systemd-networkd[1584]: cali0f678df928b: Gained IPv6LL Jan 14 01:01:23.381312 kubelet[3360]: E0114 01:01:23.381173 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:01:28.221128 containerd[1673]: time="2026-01-14T01:01:28.220861743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:01:28.551523 containerd[1673]: time="2026-01-14T01:01:28.551479996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:28.553169 containerd[1673]: time="2026-01-14T01:01:28.553133961Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:01:28.553274 containerd[1673]: time="2026-01-14T01:01:28.553195401Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:28.553456 kubelet[3360]: E0114 01:01:28.553412 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:01:28.553760 kubelet[3360]: E0114 01:01:28.553464 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:01:28.553760 kubelet[3360]: E0114 01:01:28.553578 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2290a9e2f64b4bbd8cad9f1beca215c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:28.556354 containerd[1673]: time="2026-01-14T01:01:28.556244731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:01:28.891733 containerd[1673]: 
time="2026-01-14T01:01:28.891568998Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:28.893008 containerd[1673]: time="2026-01-14T01:01:28.892968362Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:01:28.893086 containerd[1673]: time="2026-01-14T01:01:28.893054443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:28.893221 kubelet[3360]: E0114 01:01:28.893174 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:01:28.893305 kubelet[3360]: E0114 01:01:28.893227 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:01:28.893709 kubelet[3360]: E0114 01:01:28.893349 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:28.894612 kubelet[3360]: E0114 01:01:28.894573 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:01:31.219296 containerd[1673]: time="2026-01-14T01:01:31.219259130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:01:31.549931 containerd[1673]: time="2026-01-14T01:01:31.549842543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:31.552208 containerd[1673]: time="2026-01-14T01:01:31.552116150Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:01:31.552273 containerd[1673]: time="2026-01-14T01:01:31.552241390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:31.552452 kubelet[3360]: E0114 01:01:31.552397 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:31.552911 kubelet[3360]: E0114 01:01:31.552452 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:31.552911 kubelet[3360]: E0114 01:01:31.552596 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:31.553954 kubelet[3360]: E0114 01:01:31.553921 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:01:33.219543 containerd[1673]: time="2026-01-14T01:01:33.219507459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:01:33.541928 containerd[1673]: time="2026-01-14T01:01:33.541754686Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:01:33.543940 containerd[1673]: time="2026-01-14T01:01:33.543749212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:01:33.543940 containerd[1673]: time="2026-01-14T01:01:33.543810652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:33.544040 kubelet[3360]: E0114 01:01:33.543984 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:01:33.544040 kubelet[3360]: E0114 01:01:33.544027 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:01:33.544297 kubelet[3360]: E0114 01:01:33.544143 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh8rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:33.545388 kubelet[3360]: E0114 01:01:33.545355 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:01:34.219799 containerd[1673]: time="2026-01-14T01:01:34.219760084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:01:34.560319 containerd[1673]: time="2026-01-14T01:01:34.560265207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:01:34.562377 containerd[1673]: time="2026-01-14T01:01:34.562197013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:01:34.562377 containerd[1673]: time="2026-01-14T01:01:34.562258453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:34.562529 kubelet[3360]: E0114 01:01:34.562459 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:01:34.562529 kubelet[3360]: E0114 01:01:34.562504 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:01:34.562850 kubelet[3360]: E0114 01:01:34.562796 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:01:34.562927 containerd[1673]: time="2026-01-14T01:01:34.562799575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:01:34.911681 containerd[1673]: time="2026-01-14T01:01:34.911490923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:34.913511 containerd[1673]: time="2026-01-14T01:01:34.913449169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:34.913578 containerd[1673]: time="2026-01-14T01:01:34.913467729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:01:34.913826 kubelet[3360]: E0114 01:01:34.913777 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:01:34.913826 kubelet[3360]: E0114 01:01:34.913827 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:01:34.914334 kubelet[3360]: E0114 01:01:34.914158 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98jsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5456h_calico-system(3fa9e714-815e-4dce-822e-1a57156e24fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:34.914731 containerd[1673]: time="2026-01-14T01:01:34.914507212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:01:34.915493 kubelet[3360]: E0114 01:01:34.915461 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:01:35.227039 containerd[1673]: time="2026-01-14T01:01:35.226890929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:35.228579 containerd[1673]: 
time="2026-01-14T01:01:35.228528254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:01:35.228673 containerd[1673]: time="2026-01-14T01:01:35.228619575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:35.228875 kubelet[3360]: E0114 01:01:35.228831 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:01:35.228924 kubelet[3360]: E0114 01:01:35.228888 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:01:35.229046 kubelet[3360]: E0114 01:01:35.229009 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:35.230225 kubelet[3360]: E0114 01:01:35.230183 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:38.219236 containerd[1673]: time="2026-01-14T01:01:38.219192138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:01:38.547743 containerd[1673]: time="2026-01-14T01:01:38.547393383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:38.548925 containerd[1673]: time="2026-01-14T01:01:38.548834268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:01:38.548925 containerd[1673]: time="2026-01-14T01:01:38.548862388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:38.549104 kubelet[3360]: E0114 01:01:38.549037 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:38.549365 kubelet[3360]: E0114 01:01:38.549111 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:38.549365 kubelet[3360]: E0114 01:01:38.549248 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdv76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-6x9dt_calico-apiserver(19f24365-565a-4989-ac4f-5964349e35e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:38.550425 kubelet[3360]: E0114 01:01:38.550396 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:01:40.221511 kubelet[3360]: E0114 01:01:40.221464 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:01:45.219199 kubelet[3360]: E0114 01:01:45.218884 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:01:47.218522 kubelet[3360]: E0114 01:01:47.218263 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:01:49.219327 kubelet[3360]: E0114 01:01:49.219243 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:01:49.220976 kubelet[3360]: E0114 01:01:49.220928 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:01:50.219654 kubelet[3360]: E0114 01:01:50.219384 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:01:55.218996 containerd[1673]: 
time="2026-01-14T01:01:55.218954865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:01:55.578531 containerd[1673]: time="2026-01-14T01:01:55.578483926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:55.580089 containerd[1673]: time="2026-01-14T01:01:55.580016691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:01:55.580154 containerd[1673]: time="2026-01-14T01:01:55.580107171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:55.580282 kubelet[3360]: E0114 01:01:55.580244 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:01:55.581023 kubelet[3360]: E0114 01:01:55.580294 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:01:55.581023 kubelet[3360]: E0114 01:01:55.580408 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2290a9e2f64b4bbd8cad9f1beca215c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:55.583292 containerd[1673]: time="2026-01-14T01:01:55.583207341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:01:55.914542 containerd[1673]: 
time="2026-01-14T01:01:55.914427075Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:55.916199 containerd[1673]: time="2026-01-14T01:01:55.916142641Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:01:55.916288 containerd[1673]: time="2026-01-14T01:01:55.916170681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:55.916421 kubelet[3360]: E0114 01:01:55.916384 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:01:55.916500 kubelet[3360]: E0114 01:01:55.916433 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:01:55.916597 kubelet[3360]: E0114 01:01:55.916551 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:55.917796 kubelet[3360]: E0114 01:01:55.917756 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:01:56.220422 containerd[1673]: time="2026-01-14T01:01:56.220312733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:01:56.557418 containerd[1673]: time="2026-01-14T01:01:56.557357925Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:01:56.559925 containerd[1673]: time="2026-01-14T01:01:56.559823133Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:01:56.559925 containerd[1673]: time="2026-01-14T01:01:56.559866573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:01:56.560144 kubelet[3360]: E0114 01:01:56.560107 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:56.560193 kubelet[3360]: E0114 01:01:56.560157 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:01:56.560371 kubelet[3360]: E0114 01:01:56.560328 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:01:56.561525 kubelet[3360]: E0114 01:01:56.561471 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:02:01.219509 containerd[1673]: time="2026-01-14T01:02:01.219469050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:02:01.560180 containerd[1673]: time="2026-01-14T01:02:01.559959453Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:02:01.561629 containerd[1673]: time="2026-01-14T01:02:01.561585058Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:02:01.561780 containerd[1673]: time="2026-01-14T01:02:01.561737219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:01.561987 kubelet[3360]: E0114 01:02:01.561923 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:02:01.561987 kubelet[3360]: E0114 01:02:01.561977 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:02:01.562350 kubelet[3360]: E0114 01:02:01.562216 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98jsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5456h_calico-system(3fa9e714-815e-4dce-822e-1a57156e24fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:01.563761 containerd[1673]: time="2026-01-14T01:02:01.563679465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:02:01.563974 kubelet[3360]: E0114 01:02:01.563873 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:02:01.887410 containerd[1673]: time="2026-01-14T01:02:01.886901255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:01.888561 containerd[1673]: time="2026-01-14T01:02:01.888460260Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:02:01.888650 containerd[1673]: time="2026-01-14T01:02:01.888526940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:01.888801 kubelet[3360]: E0114 01:02:01.888730 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:02:01.888801 kubelet[3360]: E0114 01:02:01.888780 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:02:01.888929 kubelet[3360]: E0114 01:02:01.888890 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:02:01.891277 containerd[1673]: time="2026-01-14T01:02:01.891245188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:02:02.219255 containerd[1673]: time="2026-01-14T01:02:02.218813912Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:02.221163 containerd[1673]: time="2026-01-14T01:02:02.221058079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:02:02.221503 containerd[1673]: time="2026-01-14T01:02:02.221146319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:02.222275 kubelet[3360]: E0114 01:02:02.221814 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:02:02.222275 kubelet[3360]: E0114 01:02:02.221854 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:02:02.222275 kubelet[3360]: E0114 01:02:02.221992 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:02.222460 containerd[1673]: time="2026-01-14T01:02:02.222218002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:02:02.223614 kubelet[3360]: E0114 01:02:02.223518 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:02:02.542235 containerd[1673]: time="2026-01-14T01:02:02.542185823Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:02.544953 containerd[1673]: time="2026-01-14T01:02:02.544911911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:02:02.545043 containerd[1673]: time="2026-01-14T01:02:02.544980751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:02.546838 containerd[1673]: time="2026-01-14T01:02:02.545401793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:02:02.546904 kubelet[3360]: E0114 01:02:02.545104 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:02:02.546904 kubelet[3360]: E0114 01:02:02.545157 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:02:02.546904 kubelet[3360]: E0114 01:02:02.545400 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdv76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-6x9dt_calico-apiserver(19f24365-565a-4989-ac4f-5964349e35e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:02.546904 kubelet[3360]: E0114 01:02:02.546510 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:02:02.877790 containerd[1673]: time="2026-01-14T01:02:02.876571447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:02.879498 containerd[1673]: time="2026-01-14T01:02:02.879388456Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:02:02.879498 containerd[1673]: time="2026-01-14T01:02:02.879459696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:02.879707 kubelet[3360]: E0114 01:02:02.879626 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:02:02.879979 kubelet[3360]: E0114 01:02:02.879711 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:02:02.880817 kubelet[3360]: E0114 01:02:02.880751 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh8rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:02.882006 kubelet[3360]: E0114 01:02:02.881957 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:02:10.219997 kubelet[3360]: E0114 01:02:10.219826 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:02:11.219913 kubelet[3360]: E0114 01:02:11.219761 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:02:15.219628 kubelet[3360]: E0114 01:02:15.219575 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:02:15.219628 kubelet[3360]: E0114 01:02:15.219603 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:02:17.219123 kubelet[3360]: E0114 01:02:17.218816 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:02:17.221952 kubelet[3360]: E0114 01:02:17.221899 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:02:23.221161 kubelet[3360]: E0114 
01:02:23.221107 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:02:24.219135 kubelet[3360]: E0114 01:02:24.219080 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:02:28.225261 kubelet[3360]: E0114 01:02:28.224847 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" 
podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:02:30.219898 kubelet[3360]: E0114 01:02:30.219827 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:02:30.220786 kubelet[3360]: E0114 01:02:30.220674 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:02:32.221897 kubelet[3360]: E0114 01:02:32.221836 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:02:36.218835 containerd[1673]: time="2026-01-14T01:02:36.218787646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:02:36.757007 containerd[1673]: time="2026-01-14T01:02:36.756717174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:36.758388 containerd[1673]: time="2026-01-14T01:02:36.758352019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:02:36.758580 containerd[1673]: time="2026-01-14T01:02:36.758395980Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:36.758758 kubelet[3360]: E0114 01:02:36.758716 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:02:36.759054 kubelet[3360]: E0114 01:02:36.758766 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:02:36.759054 kubelet[3360]: E0114 01:02:36.758875 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2290a9e2f64b4bbd8cad9f1beca215c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:36.761392 containerd[1673]: time="2026-01-14T01:02:36.761133628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:02:37.109323 containerd[1673]: 
time="2026-01-14T01:02:37.109096534Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:37.110781 containerd[1673]: time="2026-01-14T01:02:37.110731299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:02:37.110850 containerd[1673]: time="2026-01-14T01:02:37.110786139Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:37.111031 kubelet[3360]: E0114 01:02:37.110970 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:02:37.111031 kubelet[3360]: E0114 01:02:37.111027 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:02:37.111225 kubelet[3360]: E0114 01:02:37.111144 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:37.112417 kubelet[3360]: E0114 01:02:37.112366 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:02:39.219740 kubelet[3360]: E0114 01:02:39.219679 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:02:39.220108 containerd[1673]: time="2026-01-14T01:02:39.219928162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:02:39.554625 containerd[1673]: time="2026-01-14T01:02:39.554520987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:39.556112 containerd[1673]: time="2026-01-14T01:02:39.556014951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:02:39.556112 containerd[1673]: time="2026-01-14T01:02:39.556057512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:39.556272 kubelet[3360]: E0114 01:02:39.556207 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:02:39.556343 kubelet[3360]: E0114 01:02:39.556278 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:02:39.556778 kubelet[3360]: E0114 01:02:39.556435 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:39.557746 kubelet[3360]: E0114 01:02:39.557599 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:02:41.218394 kubelet[3360]: E0114 01:02:41.218346 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:02:45.219309 containerd[1673]: time="2026-01-14T01:02:45.219130903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:02:45.550449 containerd[1673]: time="2026-01-14T01:02:45.550363958Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:45.552096 containerd[1673]: time="2026-01-14T01:02:45.552006163Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:02:45.552096 containerd[1673]: 
time="2026-01-14T01:02:45.552040963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:45.552374 kubelet[3360]: E0114 01:02:45.552333 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:02:45.552857 kubelet[3360]: E0114 01:02:45.552384 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:02:45.552857 kubelet[3360]: E0114 01:02:45.552521 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh8rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:45.554047 kubelet[3360]: E0114 01:02:45.553996 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:02:46.219084 containerd[1673]: time="2026-01-14T01:02:46.218849326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 01:02:46.563642 containerd[1673]: time="2026-01-14T01:02:46.563592302Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:02:46.565782 containerd[1673]: time="2026-01-14T01:02:46.565657789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 01:02:46.566161 kubelet[3360]: E0114 01:02:46.565983 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:02:46.566161 kubelet[3360]: E0114 01:02:46.566027 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 01:02:46.566445 containerd[1673]: time="2026-01-14T01:02:46.565716189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:46.566477 kubelet[3360]: E0114 01:02:46.566257 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 01:02:46.569011 containerd[1673]: time="2026-01-14T01:02:46.568957239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 01:02:46.900773 containerd[1673]: time="2026-01-14T01:02:46.900585175Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:46.902405 containerd[1673]: time="2026-01-14T01:02:46.902360980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 01:02:46.902592 containerd[1673]: time="2026-01-14T01:02:46.902391300Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:46.902663 kubelet[3360]: E0114 01:02:46.902617 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:02:46.902724 kubelet[3360]: E0114 01:02:46.902674 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 01:02:46.903055 kubelet[3360]: E0114 01:02:46.902857 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:46.904189 kubelet[3360]: E0114 01:02:46.904159 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:02:49.219051 kubelet[3360]: E0114 01:02:49.219003 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:02:49.690973 systemd[1]: Started sshd@7-10.0.30.209:22-20.161.92.111:45154.service - OpenSSH per-connection server daemon (20.161.92.111:45154). 
Jan 14 01:02:49.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.30.209:22-20.161.92.111:45154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:02:49.691914 kernel: kauditd_printk_skb: 80 callbacks suppressed Jan 14 01:02:49.691969 kernel: audit: type=1130 audit(1768352569.690:745): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.30.209:22-20.161.92.111:45154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:02:50.216814 sshd[5664]: Accepted publickey for core from 20.161.92.111 port 45154 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:02:50.216000 audit[5664]: USER_ACCT pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.220724 kernel: audit: type=1101 audit(1768352570.216:746): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.220000 audit[5664]: CRED_ACQ pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.222122 containerd[1673]: time="2026-01-14T01:02:50.221878151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 01:02:50.223595 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 14 01:02:50.227301 kernel: audit: type=1103 audit(1768352570.220:747): pid=5664 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.227535 kernel: audit: type=1006 audit(1768352570.220:748): pid=5664 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 14 01:02:50.220000 audit[5664]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc57f2170 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:02:50.230214 systemd-logind[1648]: New session 9 of user core. Jan 14 01:02:50.231388 kernel: audit: type=1300 audit(1768352570.220:748): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc57f2170 a2=3 a3=0 items=0 ppid=1 pid=5664 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:02:50.231709 kernel: audit: type=1327 audit(1768352570.220:748): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:02:50.220000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:02:50.238014 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 01:02:50.240000 audit[5664]: USER_START pid=5664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.245721 kernel: audit: type=1105 audit(1768352570.240:749): pid=5664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.245000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.249722 kernel: audit: type=1103 audit(1768352570.245:750): pid=5668 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.564556 containerd[1673]: time="2026-01-14T01:02:50.564171800Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:50.565779 containerd[1673]: time="2026-01-14T01:02:50.565734565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 01:02:50.565853 containerd[1673]: time="2026-01-14T01:02:50.565818725Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:50.566218 kubelet[3360]: E0114 01:02:50.566074 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:02:50.566218 kubelet[3360]: E0114 01:02:50.566149 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 01:02:50.566553 kubelet[3360]: E0114 01:02:50.566396 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-98jsb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5456h_calico-system(3fa9e714-815e-4dce-822e-1a57156e24fa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:50.568668 kubelet[3360]: E0114 01:02:50.568611 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:02:50.579715 sshd[5668]: Connection closed by 20.161.92.111 port 45154 Jan 14 01:02:50.580325 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jan 14 01:02:50.582000 audit[5664]: USER_END pid=5664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.582000 audit[5664]: CRED_DISP pid=5664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.589802 systemd-logind[1648]: Session 9 logged out. Waiting for processes to exit. Jan 14 01:02:50.590033 systemd[1]: sshd@7-10.0.30.209:22-20.161.92.111:45154.service: Deactivated successfully. 
Jan 14 01:02:50.590714 kernel: audit: type=1106 audit(1768352570.582:751): pid=5664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.590981 kernel: audit: type=1104 audit(1768352570.582:752): pid=5664 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:50.592619 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 01:02:50.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.30.209:22-20.161.92.111:45154 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:02:50.595766 systemd-logind[1648]: Removed session 9. 
Jan 14 01:02:53.219914 kubelet[3360]: E0114 01:02:53.219863 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:02:54.220799 containerd[1673]: time="2026-01-14T01:02:54.220510083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:02:54.553272 containerd[1673]: time="2026-01-14T01:02:54.553209622Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:02:54.554708 containerd[1673]: time="2026-01-14T01:02:54.554648627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:02:54.554790 containerd[1673]: time="2026-01-14T01:02:54.554742787Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:02:54.555202 kubelet[3360]: E0114 01:02:54.555087 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:02:54.556094 kubelet[3360]: E0114 01:02:54.555305 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:02:54.556094 kubelet[3360]: E0114 01:02:54.555927 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sdv76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-6x9dt_calico-apiserver(19f24365-565a-4989-ac4f-5964349e35e5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:02:54.557798 kubelet[3360]: E0114 01:02:54.557757 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:02:55.691004 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:02:55.691131 kernel: audit: type=1130 audit(1768352575.686:754): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.30.209:22-20.161.92.111:47232 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:02:55.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.30.209:22-20.161.92.111:47232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:02:55.687547 systemd[1]: Started sshd@8-10.0.30.209:22-20.161.92.111:47232.service - OpenSSH per-connection server daemon (20.161.92.111:47232). Jan 14 01:02:56.217000 audit[5690]: USER_ACCT pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.218517 sshd[5690]: Accepted publickey for core from 20.161.92.111 port 47232 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:02:56.222704 kernel: audit: type=1101 audit(1768352576.217:755): pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.223000 audit[5690]: CRED_ACQ pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.224681 sshd-session[5690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:02:56.229019 kernel: audit: type=1103 audit(1768352576.223:756): pid=5690 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 
addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.229083 kernel: audit: type=1006 audit(1768352576.223:757): pid=5690 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 14 01:02:56.229102 kernel: audit: type=1300 audit(1768352576.223:757): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffece8610 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:02:56.223000 audit[5690]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffece8610 a2=3 a3=0 items=0 ppid=1 pid=5690 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:02:56.231613 systemd-logind[1648]: New session 10 of user core. Jan 14 01:02:56.234133 kernel: audit: type=1327 audit(1768352576.223:757): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:02:56.223000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:02:56.241402 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 14 01:02:56.244000 audit[5690]: USER_START pid=5690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.248000 audit[5694]: CRED_ACQ pid=5694 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.252123 kernel: audit: type=1105 audit(1768352576.244:758): pid=5690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.252207 kernel: audit: type=1103 audit(1768352576.248:759): pid=5694 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.573865 sshd[5694]: Connection closed by 20.161.92.111 port 47232 Jan 14 01:02:56.574482 sshd-session[5690]: pam_unix(sshd:session): session closed for user core Jan 14 01:02:56.575000 audit[5690]: USER_END pid=5690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.579037 systemd-logind[1648]: Session 10 logged out. 
Waiting for processes to exit. Jan 14 01:02:56.579786 systemd[1]: sshd@8-10.0.30.209:22-20.161.92.111:47232.service: Deactivated successfully. Jan 14 01:02:56.575000 audit[5690]: CRED_DISP pid=5690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.582178 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 01:02:56.582769 kernel: audit: type=1106 audit(1768352576.575:760): pid=5690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.582958 kernel: audit: type=1104 audit(1768352576.575:761): pid=5690 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:02:56.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.30.209:22-20.161.92.111:47232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:02:56.584586 systemd-logind[1648]: Removed session 10. 
Jan 14 01:02:59.219219 kubelet[3360]: E0114 01:02:59.219164 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:02:59.220714 kubelet[3360]: E0114 01:02:59.220636 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:03:00.220898 kubelet[3360]: E0114 01:03:00.220837 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:03:01.687097 systemd[1]: Started sshd@9-10.0.30.209:22-20.161.92.111:47236.service - OpenSSH per-connection server daemon (20.161.92.111:47236). Jan 14 01:03:01.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.30.209:22-20.161.92.111:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:01.689292 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:03:01.689337 kernel: audit: type=1130 audit(1768352581.686:763): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.30.209:22-20.161.92.111:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:02.221785 kubelet[3360]: E0114 01:03:02.221728 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:03:02.228000 audit[5713]: USER_ACCT pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.232449 sshd[5713]: Accepted publickey for core from 20.161.92.111 port 47236 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:02.232000 audit[5713]: CRED_ACQ pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.236065 kernel: audit: type=1101 audit(1768352582.228:764): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.236225 kernel: audit: type=1103 audit(1768352582.232:765): pid=5713 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh 
res=success' Jan 14 01:03:02.233722 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:02.239138 kernel: audit: type=1006 audit(1768352582.232:766): pid=5713 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 14 01:03:02.232000 audit[5713]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2ff05d0 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:02.242911 kernel: audit: type=1300 audit(1768352582.232:766): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2ff05d0 a2=3 a3=0 items=0 ppid=1 pid=5713 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:02.232000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:02.244607 kernel: audit: type=1327 audit(1768352582.232:766): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:02.247083 systemd-logind[1648]: New session 11 of user core. Jan 14 01:03:02.257029 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 14 01:03:02.259000 audit[5713]: USER_START pid=5713 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.264731 kernel: audit: type=1105 audit(1768352582.259:767): pid=5713 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.264000 audit[5717]: CRED_ACQ pid=5717 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.267733 kernel: audit: type=1103 audit(1768352582.264:768): pid=5717 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.599707 sshd[5717]: Connection closed by 20.161.92.111 port 47236 Jan 14 01:03:02.600120 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:02.600000 audit[5713]: USER_END pid=5713 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.607749 kernel: audit: type=1106 
audit(1768352582.600:769): pid=5713 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.607829 kernel: audit: type=1104 audit(1768352582.601:770): pid=5713 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.601000 audit[5713]: CRED_DISP pid=5713 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:02.610203 systemd-logind[1648]: Session 11 logged out. Waiting for processes to exit. Jan 14 01:03:02.610374 systemd[1]: sshd@9-10.0.30.209:22-20.161.92.111:47236.service: Deactivated successfully. Jan 14 01:03:02.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.30.209:22-20.161.92.111:47236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:02.614024 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 01:03:02.617164 systemd-logind[1648]: Removed session 11. Jan 14 01:03:02.706760 systemd[1]: Started sshd@10-10.0.30.209:22-20.161.92.111:49508.service - OpenSSH per-connection server daemon (20.161.92.111:49508). Jan 14 01:03:02.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.30.209:22-20.161.92.111:49508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:03.232000 audit[5731]: USER_ACCT pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:03.233633 sshd[5731]: Accepted publickey for core from 20.161.92.111 port 49508 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:03.233000 audit[5731]: CRED_ACQ pid=5731 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:03.234000 audit[5731]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc740f9a0 a2=3 a3=0 items=0 ppid=1 pid=5731 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:03.234000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:03.235474 sshd-session[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:03.239532 systemd-logind[1648]: New session 12 of user core. Jan 14 01:03:03.247918 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 14 01:03:03.249000 audit[5731]: USER_START pid=5731 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:03.251000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:03.627030 sshd[5735]: Connection closed by 20.161.92.111 port 49508 Jan 14 01:03:03.627371 sshd-session[5731]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:03.629000 audit[5731]: USER_END pid=5731 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:03.629000 audit[5731]: CRED_DISP pid=5731 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:03.633172 systemd[1]: sshd@10-10.0.30.209:22-20.161.92.111:49508.service: Deactivated successfully. Jan 14 01:03:03.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.30.209:22-20.161.92.111:49508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:03.634930 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 14 01:03:03.635754 systemd-logind[1648]: Session 12 logged out. Waiting for processes to exit. Jan 14 01:03:03.637177 systemd-logind[1648]: Removed session 12. Jan 14 01:03:03.745292 systemd[1]: Started sshd@11-10.0.30.209:22-20.161.92.111:49516.service - OpenSSH per-connection server daemon (20.161.92.111:49516). Jan 14 01:03:03.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.30.209:22-20.161.92.111:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:04.306000 audit[5747]: USER_ACCT pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:04.307550 sshd[5747]: Accepted publickey for core from 20.161.92.111 port 49516 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:04.307000 audit[5747]: CRED_ACQ pid=5747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:04.307000 audit[5747]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc508340 a2=3 a3=0 items=0 ppid=1 pid=5747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:04.307000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:04.309515 sshd-session[5747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:04.313799 systemd-logind[1648]: New session 13 of user core. 
Jan 14 01:03:04.320887 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 01:03:04.322000 audit[5747]: USER_START pid=5747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:04.324000 audit[5751]: CRED_ACQ pid=5751 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:04.676854 sshd[5751]: Connection closed by 20.161.92.111 port 49516 Jan 14 01:03:04.677190 sshd-session[5747]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:04.677000 audit[5747]: USER_END pid=5747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:04.678000 audit[5747]: CRED_DISP pid=5747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:04.681792 systemd-logind[1648]: Session 13 logged out. Waiting for processes to exit. Jan 14 01:03:04.682258 systemd[1]: sshd@11-10.0.30.209:22-20.161.92.111:49516.service: Deactivated successfully. 
Jan 14 01:03:04.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.30.209:22-20.161.92.111:49516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:04.684170 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 01:03:04.685580 systemd-logind[1648]: Removed session 13. Jan 14 01:03:05.220099 kubelet[3360]: E0114 01:03:05.220049 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:03:09.219210 kubelet[3360]: E0114 01:03:09.219147 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:03:09.789070 systemd[1]: Started sshd@12-10.0.30.209:22-20.161.92.111:49528.service - OpenSSH per-connection server daemon (20.161.92.111:49528). Jan 14 01:03:09.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.30.209:22-20.161.92.111:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:09.790230 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 14 01:03:09.790286 kernel: audit: type=1130 audit(1768352589.788:790): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.30.209:22-20.161.92.111:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:10.219830 kubelet[3360]: E0114 01:03:10.218844 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:03:10.333000 audit[5767]: USER_ACCT pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.337340 sshd[5767]: Accepted publickey for core from 20.161.92.111 port 49528 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:10.338570 sshd-session[5767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:10.337000 audit[5767]: CRED_ACQ pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.343716 kernel: audit: type=1101 audit(1768352590.333:791): pid=5767 uid=0 auid=4294967295 
ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.343784 kernel: audit: type=1103 audit(1768352590.337:792): pid=5767 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.347663 kernel: audit: type=1006 audit(1768352590.337:793): pid=5767 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 14 01:03:10.347760 kernel: audit: type=1300 audit(1768352590.337:793): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff09b4cd0 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:10.337000 audit[5767]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff09b4cd0 a2=3 a3=0 items=0 ppid=1 pid=5767 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:10.349884 systemd-logind[1648]: New session 14 of user core. Jan 14 01:03:10.337000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:10.356902 kernel: audit: type=1327 audit(1768352590.337:793): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:10.357119 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 14 01:03:10.362000 audit[5767]: USER_START pid=5767 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.366000 audit[5771]: CRED_ACQ pid=5771 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.372736 kernel: audit: type=1105 audit(1768352590.362:794): pid=5767 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.372835 kernel: audit: type=1103 audit(1768352590.366:795): pid=5771 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.704774 sshd[5771]: Connection closed by 20.161.92.111 port 49528 Jan 14 01:03:10.705589 sshd-session[5767]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:10.706000 audit[5767]: USER_END pid=5767 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.709647 systemd-logind[1648]: Session 14 logged out. 
Waiting for processes to exit. Jan 14 01:03:10.709943 systemd[1]: sshd@12-10.0.30.209:22-20.161.92.111:49528.service: Deactivated successfully. Jan 14 01:03:10.706000 audit[5767]: CRED_DISP pid=5767 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.712043 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 01:03:10.713797 kernel: audit: type=1106 audit(1768352590.706:796): pid=5767 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.713943 kernel: audit: type=1104 audit(1768352590.706:797): pid=5767 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:10.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.30.209:22-20.161.92.111:49528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:10.714246 systemd-logind[1648]: Removed session 14. Jan 14 01:03:10.817416 systemd[1]: Started sshd@13-10.0.30.209:22-20.161.92.111:49536.service - OpenSSH per-connection server daemon (20.161.92.111:49536). Jan 14 01:03:10.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.30.209:22-20.161.92.111:49536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:11.349000 audit[5785]: USER_ACCT pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:11.350204 sshd[5785]: Accepted publickey for core from 20.161.92.111 port 49536 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:11.350000 audit[5785]: CRED_ACQ pid=5785 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:11.350000 audit[5785]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe8cd9bb0 a2=3 a3=0 items=0 ppid=1 pid=5785 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:11.350000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:11.351980 sshd-session[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:11.356470 systemd-logind[1648]: New session 15 of user core. Jan 14 01:03:11.367106 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 14 01:03:11.368000 audit[5785]: USER_START pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:11.370000 audit[5789]: CRED_ACQ pid=5789 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:11.780425 sshd[5789]: Connection closed by 20.161.92.111 port 49536 Jan 14 01:03:11.781000 sshd-session[5785]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:11.782000 audit[5785]: USER_END pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:11.782000 audit[5785]: CRED_DISP pid=5785 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:11.786662 systemd[1]: sshd@13-10.0.30.209:22-20.161.92.111:49536.service: Deactivated successfully. Jan 14 01:03:11.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.30.209:22-20.161.92.111:49536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:11.788953 systemd[1]: session-15.scope: Deactivated successfully. 
Jan 14 01:03:11.790540 systemd-logind[1648]: Session 15 logged out. Waiting for processes to exit. Jan 14 01:03:11.791988 systemd-logind[1648]: Removed session 15. Jan 14 01:03:11.891576 systemd[1]: Started sshd@14-10.0.30.209:22-20.161.92.111:49546.service - OpenSSH per-connection server daemon (20.161.92.111:49546). Jan 14 01:03:11.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.30.209:22-20.161.92.111:49546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:12.420000 audit[5801]: USER_ACCT pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:12.421793 sshd[5801]: Accepted publickey for core from 20.161.92.111 port 49546 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:12.422000 audit[5801]: CRED_ACQ pid=5801 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:12.422000 audit[5801]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeae70cb0 a2=3 a3=0 items=0 ppid=1 pid=5801 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:12.422000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:12.423921 sshd-session[5801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:12.428087 systemd-logind[1648]: New session 16 of user core. 
Jan 14 01:03:12.441217 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 01:03:12.443000 audit[5801]: USER_START pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:12.445000 audit[5805]: CRED_ACQ pid=5805 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:13.267000 audit[5816]: NETFILTER_CFG table=filter:144 family=2 entries=26 op=nft_register_rule pid=5816 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:03:13.267000 audit[5816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe1cedc50 a2=0 a3=1 items=0 ppid=3494 pid=5816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:13.267000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:03:13.275000 audit[5816]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=5816 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:03:13.275000 audit[5816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe1cedc50 a2=0 a3=1 items=0 ppid=3494 pid=5816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:13.275000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:03:13.296000 audit[5818]: NETFILTER_CFG table=filter:146 family=2 entries=38 op=nft_register_rule pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:03:13.296000 audit[5818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffdb5e2360 a2=0 a3=1 items=0 ppid=3494 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:13.296000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:03:13.302000 audit[5818]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5818 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:03:13.302000 audit[5818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdb5e2360 a2=0 a3=1 items=0 ppid=3494 pid=5818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:13.302000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:03:13.380772 sshd[5805]: Connection closed by 20.161.92.111 port 49546 Jan 14 01:03:13.380257 sshd-session[5801]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:13.381000 audit[5801]: USER_END pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 
addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:13.381000 audit[5801]: CRED_DISP pid=5801 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:13.384160 systemd[1]: sshd@14-10.0.30.209:22-20.161.92.111:49546.service: Deactivated successfully. Jan 14 01:03:13.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.30.209:22-20.161.92.111:49546 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:13.386492 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 01:03:13.388029 systemd-logind[1648]: Session 16 logged out. Waiting for processes to exit. Jan 14 01:03:13.389970 systemd-logind[1648]: Removed session 16. Jan 14 01:03:13.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.30.209:22-20.161.92.111:46166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:13.489863 systemd[1]: Started sshd@15-10.0.30.209:22-20.161.92.111:46166.service - OpenSSH per-connection server daemon (20.161.92.111:46166). 
Jan 14 01:03:14.068000 audit[5823]: USER_ACCT pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:14.069529 sshd[5823]: Accepted publickey for core from 20.161.92.111 port 46166 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:14.069000 audit[5823]: CRED_ACQ pid=5823 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:14.069000 audit[5823]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd125af80 a2=3 a3=0 items=0 ppid=1 pid=5823 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:14.069000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:14.071357 sshd-session[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:14.075969 systemd-logind[1648]: New session 17 of user core. Jan 14 01:03:14.086227 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 01:03:14.088000 audit[5823]: USER_START pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:14.088000 audit[5827]: CRED_ACQ pid=5827 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:14.219294 kubelet[3360]: E0114 01:03:14.219195 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:03:14.220034 kubelet[3360]: E0114 01:03:14.219410 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:03:14.539240 sshd[5827]: Connection closed by 20.161.92.111 port 46166 Jan 14 01:03:14.539926 sshd-session[5823]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:14.542000 audit[5823]: USER_END pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:14.542000 audit[5823]: CRED_DISP pid=5823 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:14.547066 systemd[1]: sshd@15-10.0.30.209:22-20.161.92.111:46166.service: Deactivated successfully. Jan 14 01:03:14.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.30.209:22-20.161.92.111:46166 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:14.548935 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 01:03:14.549678 systemd-logind[1648]: Session 17 logged out. Waiting for processes to exit. 
Jan 14 01:03:14.550594 systemd-logind[1648]: Removed session 17. Jan 14 01:03:14.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.30.209:22-20.161.92.111:46178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:14.660037 systemd[1]: Started sshd@16-10.0.30.209:22-20.161.92.111:46178.service - OpenSSH per-connection server daemon (20.161.92.111:46178). Jan 14 01:03:15.216000 audit[5839]: USER_ACCT pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.218130 sshd[5839]: Accepted publickey for core from 20.161.92.111 port 46178 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:15.221519 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 14 01:03:15.221615 kernel: audit: type=1101 audit(1768352595.216:831): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.220479 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:15.219000 audit[5839]: CRED_ACQ pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.225937 kernel: audit: type=1103 audit(1768352595.219:832): pid=5839 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock 
acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.219000 audit[5839]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc18fd10 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:15.232866 kernel: audit: type=1006 audit(1768352595.219:833): pid=5839 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 14 01:03:15.232950 kernel: audit: type=1300 audit(1768352595.219:833): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc18fd10 a2=3 a3=0 items=0 ppid=1 pid=5839 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:15.219000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:15.234336 kernel: audit: type=1327 audit(1768352595.219:833): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:15.234751 systemd-logind[1648]: New session 18 of user core. Jan 14 01:03:15.246898 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 14 01:03:15.248000 audit[5839]: USER_START pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.249000 audit[5843]: CRED_ACQ pid=5843 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.256496 kernel: audit: type=1105 audit(1768352595.248:834): pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.256610 kernel: audit: type=1103 audit(1768352595.249:835): pid=5843 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.593101 sshd[5843]: Connection closed by 20.161.92.111 port 46178 Jan 14 01:03:15.593504 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:15.595000 audit[5839]: USER_END pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.600161 systemd[1]: 
sshd@16-10.0.30.209:22-20.161.92.111:46178.service: Deactivated successfully. Jan 14 01:03:15.601909 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 01:03:15.603402 systemd-logind[1648]: Session 18 logged out. Waiting for processes to exit. Jan 14 01:03:15.595000 audit[5839]: CRED_DISP pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.606648 kernel: audit: type=1106 audit(1768352595.595:836): pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.606699 kernel: audit: type=1104 audit(1768352595.595:837): pid=5839 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:15.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.30.209:22-20.161.92.111:46178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:15.609526 systemd-logind[1648]: Removed session 18. Jan 14 01:03:15.609880 kernel: audit: type=1131 audit(1768352595.599:838): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.30.209:22-20.161.92.111:46178 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:16.219546 kubelet[3360]: E0114 01:03:16.219054 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:03:17.220070 kubelet[3360]: E0114 01:03:17.219999 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:03:17.588000 audit[5883]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5883 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 01:03:17.588000 audit[5883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffeb6560a0 a2=0 a3=1 items=0 ppid=3494 pid=5883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:17.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:03:17.594000 audit[5883]: NETFILTER_CFG table=nat:149 family=2 entries=104 op=nft_register_chain pid=5883 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 14 01:03:17.594000 audit[5883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffeb6560a0 a2=0 a3=1 items=0 ppid=3494 pid=5883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:17.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 01:03:20.702744 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 01:03:20.702879 kernel: audit: type=1130 audit(1768352600.699:841): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.30.209:22-20.161.92.111:46188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:20.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.30.209:22-20.161.92.111:46188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:20.701436 systemd[1]: Started sshd@17-10.0.30.209:22-20.161.92.111:46188.service - OpenSSH per-connection server daemon (20.161.92.111:46188). 
Jan 14 01:03:21.237000 audit[5885]: USER_ACCT pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.240806 sshd[5885]: Accepted publickey for core from 20.161.92.111 port 46188 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:21.244701 kernel: audit: type=1101 audit(1768352601.237:842): pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.244784 kernel: audit: type=1103 audit(1768352601.242:843): pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.242000 audit[5885]: CRED_ACQ pid=5885 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.244604 sshd-session[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:21.249697 kernel: audit: type=1006 audit(1768352601.242:844): pid=5885 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 14 01:03:21.249763 kernel: audit: type=1300 audit(1768352601.242:844): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc61260 a2=3 a3=0 items=0 ppid=1 pid=5885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:21.242000 audit[5885]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffbc61260 a2=3 a3=0 items=0 ppid=1 pid=5885 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:21.250485 systemd-logind[1648]: New session 19 of user core. Jan 14 01:03:21.242000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:21.254427 kernel: audit: type=1327 audit(1768352601.242:844): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:21.266000 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 01:03:21.266000 audit[5885]: USER_START pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.271000 audit[5889]: CRED_ACQ pid=5889 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.276661 kernel: audit: type=1105 audit(1768352601.266:845): pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.276764 kernel: audit: type=1103 audit(1768352601.271:846): pid=5889 uid=0 auid=500 
ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.602420 sshd[5889]: Connection closed by 20.161.92.111 port 46188 Jan 14 01:03:21.602353 sshd-session[5885]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:21.603000 audit[5885]: USER_END pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.607675 systemd[1]: sshd@17-10.0.30.209:22-20.161.92.111:46188.service: Deactivated successfully. Jan 14 01:03:21.609413 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 01:03:21.609701 kernel: audit: type=1106 audit(1768352601.603:847): pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.609752 kernel: audit: type=1104 audit(1768352601.603:848): pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.603000 audit[5885]: CRED_DISP pid=5885 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:21.612874 systemd-logind[1648]: Session 
19 logged out. Waiting for processes to exit. Jan 14 01:03:21.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.30.209:22-20.161.92.111:46188 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:21.614531 systemd-logind[1648]: Removed session 19. Jan 14 01:03:23.219671 kubelet[3360]: E0114 01:03:23.219601 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:03:23.219671 kubelet[3360]: E0114 01:03:23.219614 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:03:25.219521 kubelet[3360]: E0114 01:03:25.219461 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:03:26.220476 kubelet[3360]: E0114 01:03:26.220432 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:03:26.713118 systemd[1]: Started sshd@18-10.0.30.209:22-20.161.92.111:53528.service - OpenSSH per-connection server daemon (20.161.92.111:53528). Jan 14 01:03:26.714052 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:03:26.714093 kernel: audit: type=1130 audit(1768352606.711:850): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.30.209:22-20.161.92.111:53528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:26.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.30.209:22-20.161.92.111:53528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:27.259000 audit[5902]: USER_ACCT pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.261885 sshd[5902]: Accepted publickey for core from 20.161.92.111 port 53528 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:27.262000 audit[5902]: CRED_ACQ pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.264681 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:27.267706 kernel: audit: type=1101 audit(1768352607.259:851): pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.267783 kernel: audit: type=1103 audit(1768352607.262:852): pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.267811 kernel: audit: type=1006 audit(1768352607.262:853): pid=5902 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=20 res=1 Jan 14 01:03:27.269510 kernel: audit: type=1300 audit(1768352607.262:853): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa6bb270 a2=3 a3=0 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:27.262000 audit[5902]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa6bb270 a2=3 a3=0 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:27.269063 systemd-logind[1648]: New session 20 of user core. Jan 14 01:03:27.262000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:27.273989 kernel: audit: type=1327 audit(1768352607.262:853): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:27.280935 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 14 01:03:27.282000 audit[5902]: USER_START pid=5902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.284000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.291504 kernel: audit: type=1105 audit(1768352607.282:854): pid=5902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.291583 kernel: audit: type=1103 audit(1768352607.284:855): pid=5906 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.660408 sshd[5906]: Connection closed by 20.161.92.111 port 53528 Jan 14 01:03:27.659882 sshd-session[5902]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:27.659000 audit[5902]: USER_END pid=5902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.663911 systemd[1]: 
sshd@18-10.0.30.209:22-20.161.92.111:53528.service: Deactivated successfully. Jan 14 01:03:27.659000 audit[5902]: CRED_DISP pid=5902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.665942 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 01:03:27.668378 kernel: audit: type=1106 audit(1768352607.659:856): pid=5902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.668515 kernel: audit: type=1104 audit(1768352607.659:857): pid=5902 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:27.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.30.209:22-20.161.92.111:53528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:27.669401 systemd-logind[1648]: Session 20 logged out. Waiting for processes to exit. Jan 14 01:03:27.670650 systemd-logind[1648]: Removed session 20. 
Jan 14 01:03:28.219607 kubelet[3360]: E0114 01:03:28.219399 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:03:28.220904 kubelet[3360]: E0114 01:03:28.220867 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:03:32.769675 systemd[1]: Started sshd@19-10.0.30.209:22-20.161.92.111:34114.service - OpenSSH per-connection server daemon (20.161.92.111:34114). Jan 14 01:03:32.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.30.209:22-20.161.92.111:34114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:32.771053 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:03:32.771733 kernel: audit: type=1130 audit(1768352612.768:859): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.30.209:22-20.161.92.111:34114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 01:03:33.298000 audit[5921]: USER_ACCT pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.304037 sshd[5921]: Accepted publickey for core from 20.161.92.111 port 34114 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:33.304722 kernel: audit: type=1101 audit(1768352613.298:860): pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.303000 audit[5921]: CRED_ACQ pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.305729 sshd-session[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:33.309719 kernel: audit: type=1103 audit(1768352613.303:861): pid=5921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.309785 kernel: audit: type=1006 audit(1768352613.303:862): pid=5921 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 14 01:03:33.303000 audit[5921]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6aaf840 a2=3 a3=0 items=0 ppid=1 pid=5921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:33.313754 kernel: audit: type=1300 audit(1768352613.303:862): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc6aaf840 a2=3 a3=0 items=0 ppid=1 pid=5921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:33.313841 kernel: audit: type=1327 audit(1768352613.303:862): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:33.303000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:33.317190 systemd-logind[1648]: New session 21 of user core. Jan 14 01:03:33.327122 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 01:03:33.327000 audit[5921]: USER_START pid=5921 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.332000 audit[5925]: CRED_ACQ pid=5925 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.336327 kernel: audit: type=1105 audit(1768352613.327:863): pid=5921 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.336369 kernel: audit: type=1103 audit(1768352613.332:864): 
pid=5925 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.655601 sshd[5925]: Connection closed by 20.161.92.111 port 34114 Jan 14 01:03:33.655956 sshd-session[5921]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:33.655000 audit[5921]: USER_END pid=5921 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.661823 systemd[1]: sshd@19-10.0.30.209:22-20.161.92.111:34114.service: Deactivated successfully. Jan 14 01:03:33.655000 audit[5921]: CRED_DISP pid=5921 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.663557 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 01:03:33.664337 systemd-logind[1648]: Session 21 logged out. Waiting for processes to exit. 
Jan 14 01:03:33.665119 kernel: audit: type=1106 audit(1768352613.655:865): pid=5921 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.665211 kernel: audit: type=1104 audit(1768352613.655:866): pid=5921 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:33.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.30.209:22-20.161.92.111:34114 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:33.665974 systemd-logind[1648]: Removed session 21. 
Jan 14 01:03:38.221258 kubelet[3360]: E0114 01:03:38.220748 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:03:38.221914 kubelet[3360]: E0114 01:03:38.221867 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:03:38.222764 kubelet[3360]: E0114 01:03:38.222387 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:03:38.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.30.209:22-20.161.92.111:34120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:38.766868 systemd[1]: Started sshd@20-10.0.30.209:22-20.161.92.111:34120.service - OpenSSH per-connection server daemon (20.161.92.111:34120). Jan 14 01:03:38.770223 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:03:38.770313 kernel: audit: type=1130 audit(1768352618.765:868): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.30.209:22-20.161.92.111:34120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:39.292000 audit[5940]: USER_ACCT pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.294905 sshd[5940]: Accepted publickey for core from 20.161.92.111 port 34120 ssh2: RSA SHA256:x8Un1xsZ1QF4t1gUUSgtM8XboO+bSuWmauoYAOP7aOY Jan 14 01:03:39.296000 audit[5940]: CRED_ACQ pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.298871 sshd-session[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 01:03:39.301824 kernel: audit: type=1101 audit(1768352619.292:869): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.301908 kernel: audit: type=1103 audit(1768352619.296:870): pid=5940 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.304193 kernel: audit: type=1006 audit(1768352619.296:871): pid=5940 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 14 01:03:39.296000 audit[5940]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc39af90 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:39.306385 systemd-logind[1648]: New session 22 of user core. Jan 14 01:03:39.307664 kernel: audit: type=1300 audit(1768352619.296:871): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc39af90 a2=3 a3=0 items=0 ppid=1 pid=5940 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:03:39.307729 kernel: audit: type=1327 audit(1768352619.296:871): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:39.296000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 01:03:39.312932 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 14 01:03:39.314000 audit[5940]: USER_START pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.316000 audit[5944]: CRED_ACQ pid=5944 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.322471 kernel: audit: type=1105 audit(1768352619.314:872): pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.322537 kernel: audit: type=1103 audit(1768352619.316:873): pid=5944 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.654279 sshd[5944]: Connection closed by 20.161.92.111 port 34120 Jan 14 01:03:39.654136 sshd-session[5940]: pam_unix(sshd:session): session closed for user core Jan 14 01:03:39.654000 audit[5940]: USER_END pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.659584 systemd[1]: 
sshd@20-10.0.30.209:22-20.161.92.111:34120.service: Deactivated successfully. Jan 14 01:03:39.655000 audit[5940]: CRED_DISP pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.662777 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 01:03:39.663865 kernel: audit: type=1106 audit(1768352619.654:874): pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.663935 kernel: audit: type=1104 audit(1768352619.655:875): pid=5940 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 01:03:39.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.30.209:22-20.161.92.111:34120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 01:03:39.664266 systemd-logind[1648]: Session 22 logged out. Waiting for processes to exit. Jan 14 01:03:39.665895 systemd-logind[1648]: Removed session 22. 
Jan 14 01:03:40.219894 kubelet[3360]: E0114 01:03:40.219834 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:03:43.218682 kubelet[3360]: E0114 01:03:43.218617 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:03:43.219327 kubelet[3360]: E0114 01:03:43.219253 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:03:50.218061 kubelet[3360]: E0114 01:03:50.218000 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:03:51.219519 kubelet[3360]: E0114 01:03:51.219468 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:03:52.219983 kubelet[3360]: E0114 01:03:52.219867 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:03:53.219228 kubelet[3360]: E0114 01:03:53.218894 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:03:54.219165 kubelet[3360]: E0114 01:03:54.219104 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:03:56.219656 kubelet[3360]: E0114 01:03:56.219307 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:04:01.218344 kubelet[3360]: E0114 01:04:01.218231 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:04:05.218799 kubelet[3360]: E0114 01:04:05.218744 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1" Jan 14 01:04:06.219080 containerd[1673]: time="2026-01-14T01:04:06.218958925Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 01:04:06.571464 containerd[1673]: time="2026-01-14T01:04:06.571384605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:04:06.572809 containerd[1673]: time="2026-01-14T01:04:06.572753809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 01:04:06.572886 containerd[1673]: time="2026-01-14T01:04:06.572843969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 01:04:06.573057 kubelet[3360]: E0114 01:04:06.573017 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:04:06.573550 kubelet[3360]: E0114 01:04:06.573353 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 01:04:06.573550 kubelet[3360]: E0114 01:04:06.573485 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:2290a9e2f64b4bbd8cad9f1beca215c0,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 01:04:06.575409 containerd[1673]: time="2026-01-14T01:04:06.575362217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 01:04:06.914616 containerd[1673]: 
time="2026-01-14T01:04:06.914479056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:04:06.916806 containerd[1673]: time="2026-01-14T01:04:06.916676303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 01:04:06.916806 containerd[1673]: time="2026-01-14T01:04:06.916744743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 01:04:06.917083 kubelet[3360]: E0114 01:04:06.917044 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:04:06.917148 kubelet[3360]: E0114 01:04:06.917096 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 01:04:06.917256 kubelet[3360]: E0114 01:04:06.917217 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2knrg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-85d9c4c575-qvf7c_calico-system(e7eb783d-4ded-4f2b-bd86-b988e9af5765): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 01:04:06.918448 kubelet[3360]: E0114 01:04:06.918379 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765" Jan 14 01:04:08.220243 containerd[1673]: time="2026-01-14T01:04:08.220181296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 01:04:08.220735 kubelet[3360]: E0114 01:04:08.220297 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5456h" podUID="3fa9e714-815e-4dce-822e-1a57156e24fa" Jan 14 01:04:08.543424 containerd[1673]: time="2026-01-14T01:04:08.543337847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 01:04:08.545341 containerd[1673]: time="2026-01-14T01:04:08.545290893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 01:04:08.545445 containerd[1673]: time="2026-01-14T01:04:08.545327573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 01:04:08.545646 kubelet[3360]: E0114 01:04:08.545591 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:04:08.545646 kubelet[3360]: E0114 01:04:08.545645 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 01:04:08.545947 kubelet[3360]: E0114 01:04:08.545808 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh8rh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-645b4cd96c-cs27s_calico-system(9d6eeb8e-19e2-43b0-aa84-76d3b0583005): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 01:04:08.547023 kubelet[3360]: E0114 01:04:08.546985 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005" Jan 14 01:04:09.221708 containerd[1673]: time="2026-01-14T01:04:09.221369164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 01:04:09.536753 containerd[1673]: time="2026-01-14T01:04:09.536547610Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 
01:04:09.538463 containerd[1673]: time="2026-01-14T01:04:09.538333575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 01:04:09.538463 containerd[1673]: time="2026-01-14T01:04:09.538411415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 01:04:09.538639 kubelet[3360]: E0114 01:04:09.538575 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:04:09.538639 kubelet[3360]: E0114 01:04:09.538634 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 01:04:09.538952 kubelet[3360]: E0114 01:04:09.538808 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gshth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9bc96f9cf-lvk8t_calico-apiserver(b8e2c676-74d2-4fbb-b79b-4d5fa6599826): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 01:04:09.540014 kubelet[3360]: E0114 01:04:09.539972 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-lvk8t" podUID="b8e2c676-74d2-4fbb-b79b-4d5fa6599826" Jan 14 01:04:10.564138 systemd[1]: cri-containerd-4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27.scope: Deactivated successfully. Jan 14 01:04:10.564632 systemd[1]: cri-containerd-4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27.scope: Consumed 34.217s CPU time, 117M memory peak. Jan 14 01:04:10.565598 containerd[1673]: time="2026-01-14T01:04:10.565564003Z" level=info msg="received container exit event container_id:\"4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27\" id:\"4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27\" pid:3696 exit_status:1 exited_at:{seconds:1768352650 nanos:565245842}" Jan 14 01:04:10.567000 audit: BPF prog-id=146 op=UNLOAD Jan 14 01:04:10.569887 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 14 01:04:10.569940 kernel: audit: type=1334 audit(1768352650.567:877): prog-id=146 op=UNLOAD Jan 14 01:04:10.567000 audit: BPF prog-id=150 op=UNLOAD Jan 14 01:04:10.571818 kernel: audit: type=1334 audit(1768352650.567:878): prog-id=150 op=UNLOAD Jan 14 01:04:10.585090 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27-rootfs.mount: Deactivated successfully. 
Jan 14 01:04:10.721586 systemd[1]: cri-containerd-c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8.scope: Deactivated successfully. Jan 14 01:04:10.722547 systemd[1]: cri-containerd-c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8.scope: Consumed 4.537s CPU time, 65.3M memory peak. Jan 14 01:04:10.721000 audit: BPF prog-id=256 op=LOAD Jan 14 01:04:10.725221 kubelet[3360]: E0114 01:04:10.724868 3360 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.30.209:34482->10.0.30.244:2379: read: connection timed out" Jan 14 01:04:10.721000 audit: BPF prog-id=83 op=UNLOAD Jan 14 01:04:10.727838 kernel: audit: type=1334 audit(1768352650.721:879): prog-id=256 op=LOAD Jan 14 01:04:10.727896 kernel: audit: type=1334 audit(1768352650.721:880): prog-id=83 op=UNLOAD Jan 14 01:04:10.727930 containerd[1673]: time="2026-01-14T01:04:10.726532096Z" level=info msg="received container exit event container_id:\"c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8\" id:\"c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8\" pid:3170 exit_status:1 exited_at:{seconds:1768352650 nanos:726099455}" Jan 14 01:04:10.730000 audit: BPF prog-id=98 op=UNLOAD Jan 14 01:04:10.730000 audit: BPF prog-id=102 op=UNLOAD Jan 14 01:04:10.733796 kernel: audit: type=1334 audit(1768352650.730:881): prog-id=98 op=UNLOAD Jan 14 01:04:10.733853 kernel: audit: type=1334 audit(1768352650.730:882): prog-id=102 op=UNLOAD Jan 14 01:04:10.749990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8-rootfs.mount: Deactivated successfully. 
Jan 14 01:04:10.759063 kubelet[3360]: I0114 01:04:10.759036 3360 scope.go:117] "RemoveContainer" containerID="4c8a7f4d7699f7a8174bf912dc2ec0061568f0cbb0d026ca6cc3e2e53ad0cc27" Jan 14 01:04:10.761343 kubelet[3360]: I0114 01:04:10.761319 3360 scope.go:117] "RemoveContainer" containerID="c407ad347dbe462e7a1e338cfe7975b1722b082e2b286255981eab66d7573fd8" Jan 14 01:04:10.762201 containerd[1673]: time="2026-01-14T01:04:10.762157205Z" level=info msg="CreateContainer within sandbox \"20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 01:04:10.763463 containerd[1673]: time="2026-01-14T01:04:10.763196808Z" level=info msg="CreateContainer within sandbox \"c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 01:04:10.771263 containerd[1673]: time="2026-01-14T01:04:10.771221793Z" level=info msg="Container 9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:04:10.776713 containerd[1673]: time="2026-01-14T01:04:10.776454769Z" level=info msg="Container b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:04:10.781094 containerd[1673]: time="2026-01-14T01:04:10.781064983Z" level=info msg="CreateContainer within sandbox \"20141d2fa5e5a70faa0218b87d82c1c158ecbc620408ed59e7cda48fdf8be78b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3\"" Jan 14 01:04:10.781799 containerd[1673]: time="2026-01-14T01:04:10.781776705Z" level=info msg="StartContainer for \"9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3\"" Jan 14 01:04:10.782895 containerd[1673]: time="2026-01-14T01:04:10.782868868Z" level=info msg="connecting to shim 
9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3" address="unix:///run/containerd/s/944a0878fea4befa0aa9b19fcf3557c03bc26290e54253e50e84257ece81b340" protocol=ttrpc version=3 Jan 14 01:04:10.787132 containerd[1673]: time="2026-01-14T01:04:10.787023641Z" level=info msg="CreateContainer within sandbox \"c56e0e4d5bbaecc24e3f6737e74ff701518968bc88d189efc85f2b98efd406cc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0\"" Jan 14 01:04:10.788145 containerd[1673]: time="2026-01-14T01:04:10.788114845Z" level=info msg="StartContainer for \"b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0\"" Jan 14 01:04:10.789546 containerd[1673]: time="2026-01-14T01:04:10.789521409Z" level=info msg="connecting to shim b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0" address="unix:///run/containerd/s/7cd5e2bb2f6d4ed7c37d5d3507ae87598a9d5700dfbbff909807ff7d5ff7379a" protocol=ttrpc version=3 Jan 14 01:04:10.803892 systemd[1]: Started cri-containerd-9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3.scope - libcontainer container 9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3. Jan 14 01:04:10.806874 systemd[1]: Started cri-containerd-b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0.scope - libcontainer container b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0. 
Jan 14 01:04:10.814000 audit: BPF prog-id=257 op=LOAD Jan 14 01:04:10.818721 kernel: audit: type=1334 audit(1768352650.814:883): prog-id=257 op=LOAD Jan 14 01:04:10.823267 kernel: audit: type=1334 audit(1768352650.816:884): prog-id=258 op=LOAD Jan 14 01:04:10.823325 kernel: audit: type=1300 audit(1768352650.816:884): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.816000 audit: BPF prog-id=258 op=LOAD Jan 14 01:04:10.816000 audit[6018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.816000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.827158 kernel: audit: type=1327 audit(1768352650.816:884): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.817000 audit: BPF prog-id=258 op=UNLOAD Jan 14 01:04:10.817000 audit[6018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
01:04:10.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.817000 audit: BPF prog-id=259 op=LOAD Jan 14 01:04:10.817000 audit[6018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.817000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.821000 audit: BPF prog-id=260 op=LOAD Jan 14 01:04:10.821000 audit[6018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.821000 audit: BPF prog-id=261 op=LOAD Jan 14 01:04:10.821000 audit: BPF prog-id=260 op=UNLOAD Jan 14 01:04:10.821000 audit[6018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.821000 audit: BPF prog-id=259 op=UNLOAD Jan 14 01:04:10.821000 audit[6018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.821000 audit: BPF prog-id=262 op=LOAD Jan 14 01:04:10.821000 audit[6018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3482 pid=6018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.821000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935373835363563376138613638663037306638383362383730363962 Jan 14 01:04:10.822000 audit: BPF prog-id=263 op=LOAD Jan 14 01:04:10.822000 audit[6026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=3042 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.822000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.825000 audit: BPF prog-id=263 op=UNLOAD Jan 14 01:04:10.825000 audit[6026]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.825000 audit: BPF prog-id=264 op=LOAD Jan 14 01:04:10.825000 audit[6026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3042 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.825000 audit: BPF prog-id=265 op=LOAD Jan 14 01:04:10.825000 audit[6026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3042 
pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.825000 audit: BPF prog-id=265 op=UNLOAD Jan 14 01:04:10.825000 audit[6026]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.825000 audit: BPF prog-id=264 op=UNLOAD Jan 14 01:04:10.825000 audit[6026]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3042 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.825000 audit: BPF prog-id=266 op=LOAD Jan 14 01:04:10.825000 audit[6026]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=4000106648 a2=98 a3=0 items=0 ppid=3042 pid=6026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 01:04:10.825000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6234383439616139636633303431303263323761366634633065366262 Jan 14 01:04:10.848494 containerd[1673]: time="2026-01-14T01:04:10.848440509Z" level=info msg="StartContainer for \"9578565c7a8a68f070f883b87069b1cf257dc2fa7ab75d8116324697f0de93d3\" returns successfully" Jan 14 01:04:10.861063 containerd[1673]: time="2026-01-14T01:04:10.861023628Z" level=info msg="StartContainer for \"b4849aa9cf304102c27a6f4c0e6bbed3d9131a5186c90078f805f403bbb2b0e0\" returns successfully" Jan 14 01:04:11.684080 kubelet[3360]: E0114 01:04:11.683909 3360 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.30.209:34298->10.0.30.244:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-9bc96f9cf-6x9dt.188a7336adf08c17 calico-apiserver 1768 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-9bc96f9cf-6x9dt,UID:19f24365-565a-4989-ac4f-5964349e35e5,APIVersion:v1,ResourceVersion:841,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-a666ba3d92,},FirstTimestamp:2026-01-14 01:01:22 +0000 UTC,LastTimestamp:2026-01-14 01:04:01.218176403 +0000 UTC m=+213.088306258,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-a666ba3d92,}" Jan 14 01:04:12.219965 kubelet[3360]: E0114 01:04:12.219916 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9bc96f9cf-6x9dt" podUID="19f24365-565a-4989-ac4f-5964349e35e5" Jan 14 01:04:15.759681 systemd[1]: cri-containerd-1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8.scope: Deactivated successfully. Jan 14 01:04:15.760028 systemd[1]: cri-containerd-1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8.scope: Consumed 4.279s CPU time, 24.6M memory peak. 
Jan 14 01:04:15.758000 audit: BPF prog-id=267 op=LOAD Jan 14 01:04:15.761912 containerd[1673]: time="2026-01-14T01:04:15.761873800Z" level=info msg="received container exit event container_id:\"1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8\" id:\"1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8\" pid:3201 exit_status:1 exited_at:{seconds:1768352655 nanos:761528279}" Jan 14 01:04:15.762158 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 14 01:04:15.762190 kernel: audit: type=1334 audit(1768352655.758:899): prog-id=267 op=LOAD Jan 14 01:04:15.762206 kernel: audit: type=1334 audit(1768352655.758:900): prog-id=93 op=UNLOAD Jan 14 01:04:15.758000 audit: BPF prog-id=93 op=UNLOAD Jan 14 01:04:15.764000 audit: BPF prog-id=103 op=UNLOAD Jan 14 01:04:15.764000 audit: BPF prog-id=107 op=UNLOAD Jan 14 01:04:15.767712 kernel: audit: type=1334 audit(1768352655.764:901): prog-id=103 op=UNLOAD Jan 14 01:04:15.767752 kernel: audit: type=1334 audit(1768352655.764:902): prog-id=107 op=UNLOAD Jan 14 01:04:15.783598 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8-rootfs.mount: Deactivated successfully. 
Jan 14 01:04:16.780564 kubelet[3360]: I0114 01:04:16.780537 3360 scope.go:117] "RemoveContainer" containerID="1711190190a8eb217c55715983eae366d3ad69c07e53949f2510ca76da873cc8" Jan 14 01:04:16.782673 containerd[1673]: time="2026-01-14T01:04:16.782642727Z" level=info msg="CreateContainer within sandbox \"33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 01:04:16.795220 containerd[1673]: time="2026-01-14T01:04:16.794475164Z" level=info msg="Container 547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b: CDI devices from CRI Config.CDIDevices: []" Jan 14 01:04:16.805866 containerd[1673]: time="2026-01-14T01:04:16.805833078Z" level=info msg="CreateContainer within sandbox \"33af38dd2d48c62f74c3d0a969fcace00817e1f70ba6ffda7d16a1494be3eaeb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b\"" Jan 14 01:04:16.806490 containerd[1673]: time="2026-01-14T01:04:16.806455280Z" level=info msg="StartContainer for \"547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b\"" Jan 14 01:04:16.807557 containerd[1673]: time="2026-01-14T01:04:16.807526524Z" level=info msg="connecting to shim 547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b" address="unix:///run/containerd/s/fce870e94a27f5d02c6431250f80d9d956783842f49cfe549f0bfe502bb5d32b" protocol=ttrpc version=3 Jan 14 01:04:16.828006 systemd[1]: Started cri-containerd-547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b.scope - libcontainer container 547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b. 
Jan 14 01:04:16.837000 audit: BPF prog-id=268 op=LOAD
Jan 14 01:04:16.838000 audit: BPF prog-id=269 op=LOAD
Jan 14 01:04:16.840905 kernel: audit: type=1334 audit(1768352656.837:903): prog-id=268 op=LOAD
Jan 14 01:04:16.840966 kernel: audit: type=1334 audit(1768352656.838:904): prog-id=269 op=LOAD
Jan 14 01:04:16.840993 kernel: audit: type=1300 audit(1768352656.838:904): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.838000 audit[6123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.847901 kernel: audit: type=1327 audit(1768352656.838:904): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.848047 kernel: audit: type=1334 audit(1768352656.838:905): prog-id=269 op=UNLOAD
Jan 14 01:04:16.838000 audit: BPF prog-id=269 op=UNLOAD
Jan 14 01:04:16.838000 audit[6123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.852322 kernel: audit: type=1300 audit(1768352656.838:905): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.838000 audit: BPF prog-id=270 op=LOAD
Jan 14 01:04:16.838000 audit[6123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.838000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.839000 audit: BPF prog-id=271 op=LOAD
Jan 14 01:04:16.839000 audit[6123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.843000 audit: BPF prog-id=271 op=UNLOAD
Jan 14 01:04:16.843000 audit[6123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.843000 audit: BPF prog-id=270 op=UNLOAD
Jan 14 01:04:16.843000 audit[6123]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.843000 audit: BPF prog-id=272 op=LOAD
Jan 14 01:04:16.843000 audit[6123]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3086 pid=6123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 01:04:16.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3534376634373261616639323639636665346461656533316164663830
Jan 14 01:04:16.874359 containerd[1673]: time="2026-01-14T01:04:16.874323608Z" level=info msg="StartContainer for \"547f472aaf9269cfe4daee31adf800a215dbb12e22b15c66afeaf9b3fa20aa1b\" returns successfully"
Jan 14 01:04:17.956498 kubelet[3360]: I0114 01:04:17.956438 3360 status_manager.go:895] "Failed to get status for pod" podUID="fcbd49d1-6cc7-4a4f-a4c7-314217c673eb" pod="tigera-operator/tigera-operator-7dcd859c48-hvztc" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.30.209:34416->10.0.30.244:2379: read: connection timed out"
Jan 14 01:04:19.218764 containerd[1673]: time="2026-01-14T01:04:19.218722224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 14 01:04:19.552184 containerd[1673]: time="2026-01-14T01:04:19.551872517Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:04:19.553443 containerd[1673]: time="2026-01-14T01:04:19.553409281Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 14 01:04:19.553611 containerd[1673]: time="2026-01-14T01:04:19.553496801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:04:19.553908 kubelet[3360]: E0114 01:04:19.553871 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:04:19.554459 kubelet[3360]: E0114 01:04:19.554222 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 14 01:04:19.554459 kubelet[3360]: E0114 01:04:19.554409 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:04:19.556706 containerd[1673]: time="2026-01-14T01:04:19.556563171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 14 01:04:19.899116 containerd[1673]: time="2026-01-14T01:04:19.898911012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 14 01:04:19.900642 containerd[1673]: time="2026-01-14T01:04:19.900579697Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 14 01:04:19.900745 containerd[1673]: time="2026-01-14T01:04:19.900681177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 14 01:04:19.900935 kubelet[3360]: E0114 01:04:19.900892 3360 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 01:04:19.900991 kubelet[3360]: E0114 01:04:19.900949 3360 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 14 01:04:19.901119 kubelet[3360]: E0114 01:04:19.901071 3360 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2d5x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7cqlb_calico-system(156d3f01-7b19-463c-9dd8-133de12239c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 14 01:04:19.902368 kubelet[3360]: E0114 01:04:19.902325 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7cqlb" podUID="156d3f01-7b19-463c-9dd8-133de12239c1"
Jan 14 01:04:20.163803 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec
Jan 14 01:04:20.219810 kubelet[3360]: E0114 01:04:20.219735 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85d9c4c575-qvf7c" podUID="e7eb783d-4ded-4f2b-bd86-b988e9af5765"
Jan 14 01:04:20.725319 kubelet[3360]: E0114 01:04:20.725269 3360 controller.go:195] "Failed to update lease" err="Put \"https://10.0.30.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-a666ba3d92?timeout=10s\": context deadline exceeded"
Jan 14 01:04:21.218513 kubelet[3360]: E0114 01:04:21.218473 3360 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-645b4cd96c-cs27s" podUID="9d6eeb8e-19e2-43b0-aa84-76d3b0583005"