Jan 28 00:01:15.518753 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 28 00:01:15.518779 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 27 22:20:26 -00 2026 Jan 28 00:01:15.518790 kernel: KASLR enabled Jan 28 00:01:15.518796 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 28 00:01:15.518802 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 28 00:01:15.518808 kernel: random: crng init done Jan 28 00:01:15.518816 kernel: secureboot: Secure boot disabled Jan 28 00:01:15.518822 kernel: ACPI: Early table checksum verification disabled Jan 28 00:01:15.518828 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 28 00:01:15.518836 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 28 00:01:15.518843 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518849 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518855 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518861 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518870 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518877 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518884 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518891 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518897 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:01:15.518904 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 28 00:01:15.518910 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 28 00:01:15.518917 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 28 00:01:15.518923 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 28 00:01:15.518932 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Jan 28 00:01:15.518938 kernel: Zone ranges: Jan 28 00:01:15.518945 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 28 00:01:15.518951 kernel: DMA32 empty Jan 28 00:01:15.518958 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 28 00:01:15.518964 kernel: Device empty Jan 28 00:01:15.518971 kernel: Movable zone start for each node Jan 28 00:01:15.518977 kernel: Early memory node ranges Jan 28 00:01:15.518984 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 28 00:01:15.518990 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 28 00:01:15.519010 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 28 00:01:15.519017 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 28 00:01:15.519026 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 28 00:01:15.519033 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 28 00:01:15.519039 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 28 00:01:15.519046 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 28 00:01:15.519052 kernel: node 0: [mem 
0x0000000139fe0000-0x0000000139ffffff] Jan 28 00:01:15.519062 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 28 00:01:15.519070 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 28 00:01:15.519078 kernel: cma: Reserved 16 MiB at 0x00000000fe600000 on node -1 Jan 28 00:01:15.519086 kernel: psci: probing for conduit method from ACPI. Jan 28 00:01:15.519094 kernel: psci: PSCIv1.1 detected in firmware. Jan 28 00:01:15.519102 kernel: psci: Using standard PSCI v0.2 function IDs Jan 28 00:01:15.519110 kernel: psci: Trusted OS migration not required Jan 28 00:01:15.519119 kernel: psci: SMC Calling Convention v1.1 Jan 28 00:01:15.519127 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 28 00:01:15.519136 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 28 00:01:15.519143 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 28 00:01:15.519150 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 28 00:01:15.519157 kernel: Detected PIPT I-cache on CPU0 Jan 28 00:01:15.519164 kernel: CPU features: detected: GIC system register CPU interface Jan 28 00:01:15.519171 kernel: CPU features: detected: Spectre-v4 Jan 28 00:01:15.519178 kernel: CPU features: detected: Spectre-BHB Jan 28 00:01:15.519185 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 28 00:01:15.519192 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 28 00:01:15.519199 kernel: CPU features: detected: ARM erratum 1418040 Jan 28 00:01:15.519206 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 28 00:01:15.519214 kernel: alternatives: applying boot alternatives Jan 28 00:01:15.519223 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=880c7a57ca1a4cf41361128ef304e12abcda0ba85f8697ad932e9820a1865169 Jan 28 00:01:15.519230 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 28 00:01:15.519238 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:01:15.519244 kernel: Fallback order for Node 0: 0 Jan 28 00:01:15.519252 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Jan 28 00:01:15.519259 kernel: Policy zone: Normal Jan 28 00:01:15.519266 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:01:15.519273 kernel: software IO TLB: area num 2. Jan 28 00:01:15.519280 kernel: software IO TLB: mapped [mem 0x00000000fa600000-0x00000000fe600000] (64MB) Jan 28 00:01:15.519288 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 28 00:01:15.519295 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:01:15.519303 kernel: rcu: RCU event tracing is enabled. Jan 28 00:01:15.519310 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 28 00:01:15.519318 kernel: Trampoline variant of Tasks RCU enabled. Jan 28 00:01:15.519325 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:01:15.519332 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:01:15.519339 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 28 00:01:15.519346 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 28 00:01:15.519354 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 28 00:01:15.519361 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 28 00:01:15.519369 kernel: GICv3: 256 SPIs implemented Jan 28 00:01:15.519376 kernel: GICv3: 0 Extended SPIs implemented Jan 28 00:01:15.519383 kernel: Root IRQ handler: gic_handle_irq Jan 28 00:01:15.519390 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 28 00:01:15.519396 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 28 00:01:15.519403 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 28 00:01:15.519410 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 28 00:01:15.519418 kernel: ITS@0x0000000008080000: allocated 8192 Devices @101d00000 (indirect, esz 8, psz 64K, shr 1) Jan 28 00:01:15.519425 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @101d10000 (flat, esz 8, psz 64K, shr 1) Jan 28 00:01:15.519432 kernel: GICv3: using LPI property table @0x0000000101d20000 Jan 28 00:01:15.519439 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000101d30000 Jan 28 00:01:15.519447 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 00:01:15.519454 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 28 00:01:15.519462 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 28 00:01:15.519469 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 28 00:01:15.519476 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 28 00:01:15.519483 kernel: Console: colour dummy device 80x25 Jan 28 00:01:15.519491 kernel: ACPI: Core revision 20240827 Jan 28 00:01:15.519499 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 28 00:01:15.519507 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:01:15.519515 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 28 00:01:15.519522 kernel: landlock: Up and running. Jan 28 00:01:15.519530 kernel: SELinux: Initializing. Jan 28 00:01:15.519537 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:01:15.519544 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 28 00:01:15.519552 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:01:15.519559 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:01:15.519567 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 28 00:01:15.519575 kernel: Remapping and enabling EFI services. Jan 28 00:01:15.519582 kernel: smp: Bringing up secondary CPUs ... Jan 28 00:01:15.519598 kernel: Detected PIPT I-cache on CPU1 Jan 28 00:01:15.519607 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 28 00:01:15.519630 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000101d40000 Jan 28 00:01:15.519638 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 28 00:01:15.519645 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 28 00:01:15.519655 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 00:01:15.519663 kernel: SMP: Total of 2 processors activated. 
Jan 28 00:01:15.519674 kernel: CPU: All CPU(s) started at EL1 Jan 28 00:01:15.519684 kernel: CPU features: detected: 32-bit EL0 Support Jan 28 00:01:15.519691 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 28 00:01:15.519699 kernel: CPU features: detected: Common not Private translations Jan 28 00:01:15.519707 kernel: CPU features: detected: CRC32 instructions Jan 28 00:01:15.519715 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 28 00:01:15.519724 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 28 00:01:15.519732 kernel: CPU features: detected: LSE atomic instructions Jan 28 00:01:15.519740 kernel: CPU features: detected: Privileged Access Never Jan 28 00:01:15.519747 kernel: CPU features: detected: RAS Extension Support Jan 28 00:01:15.519755 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 28 00:01:15.519763 kernel: alternatives: applying system-wide alternatives Jan 28 00:01:15.519773 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 28 00:01:15.519781 kernel: Memory: 3885860K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 188660K reserved, 16384K cma-reserved) Jan 28 00:01:15.519789 kernel: devtmpfs: initialized Jan 28 00:01:15.519798 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:01:15.519806 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 28 00:01:15.519814 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 28 00:01:15.519822 kernel: 0 pages in range for non-PLT usage Jan 28 00:01:15.519831 kernel: 515152 pages in range for PLT usage Jan 28 00:01:15.519839 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:01:15.519846 kernel: SMBIOS 3.0.0 present. Jan 28 00:01:15.519854 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 28 00:01:15.519862 kernel: DMI: Memory slots populated: 1/1 Jan 28 00:01:15.519870 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:01:15.519878 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 28 00:01:15.519887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 28 00:01:15.519896 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 28 00:01:15.519904 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:01:15.519911 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1 Jan 28 00:01:15.519919 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:01:15.519927 kernel: cpuidle: using governor menu Jan 28 00:01:15.519935 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 28 00:01:15.519945 kernel: ASID allocator initialised with 32768 entries Jan 28 00:01:15.519953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:01:15.519961 kernel: Serial: AMBA PL011 UART driver Jan 28 00:01:15.519969 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:01:15.519977 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:01:15.519985 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 28 00:01:15.520032 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 28 00:01:15.520046 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:01:15.520054 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:01:15.520061 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 28 00:01:15.520069 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 28 00:01:15.520077 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:01:15.520085 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:01:15.520092 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:01:15.520100 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:01:15.520110 kernel: ACPI: Interpreter enabled Jan 28 00:01:15.520118 kernel: ACPI: Using GIC for interrupt routing Jan 28 00:01:15.520126 kernel: ACPI: MCFG table detected, 1 entries Jan 28 00:01:15.520134 kernel: ACPI: CPU0 has been hot-added Jan 28 00:01:15.520142 kernel: ACPI: CPU1 has been hot-added Jan 28 00:01:15.520150 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 28 00:01:15.520158 kernel: printk: legacy console [ttyAMA0] enabled Jan 28 00:01:15.520167 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 00:01:15.520365 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 00:01:15.520457 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 28 00:01:15.520539 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 28 00:01:15.520669 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 28 00:01:15.520757 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 28 00:01:15.520771 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 28 00:01:15.520779 kernel: PCI host bridge to bus 0000:00 Jan 28 00:01:15.520871 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 28 00:01:15.520948 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 28 00:01:15.521039 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 28 00:01:15.521114 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 00:01:15.521219 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 28 00:01:15.521311 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Jan 28 00:01:15.521398 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Jan 28 00:01:15.521483 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Jan 28 00:01:15.521575 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.522556 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Jan 28 00:01:15.522759 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 28 
00:01:15.523144 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 28 00:01:15.523263 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 28 00:01:15.523430 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.524035 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Jan 28 00:01:15.524162 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 28 00:01:15.524247 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 28 00:01:15.524345 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.524427 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Jan 28 00:01:15.524508 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 28 00:01:15.525875 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 28 00:01:15.526099 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 28 00:01:15.526204 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.526288 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Jan 28 00:01:15.526370 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 28 00:01:15.526450 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 28 00:01:15.526540 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 28 00:01:15.526659 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.526744 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Jan 28 00:01:15.526826 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 28 00:01:15.526907 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 28 00:01:15.526990 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 28 00:01:15.527104 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.527188 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Jan 28 00:01:15.527268 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 28 00:01:15.527348 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Jan 28 00:01:15.527432 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 28 00:01:15.527520 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.529714 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Jan 28 00:01:15.529856 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 28 00:01:15.529942 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Jan 28 00:01:15.530048 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Jan 28 00:01:15.530153 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.530236 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Jan 28 00:01:15.530323 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 28 00:01:15.530404 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Jan 28 00:01:15.530523 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 28 00:01:15.530640 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Jan 28 00:01:15.530728 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 28 00:01:15.530812 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Jan 28 00:01:15.530902 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Jan 28 00:01:15.530983 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Jan 28 00:01:15.531096 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 28 00:01:15.531182 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Jan 28 00:01:15.531266 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 28 00:01:15.531352 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 28 00:01:15.531448 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 28 00:01:15.531534 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Jan 28 00:01:15.535763 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Jan 28 00:01:15.535908 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Jan 28 00:01:15.536054 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 28 00:01:15.536159 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 28 00:01:15.536243 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 28 00:01:15.536338 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 28 00:01:15.536421 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Jan 28 00:01:15.536505 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 28 00:01:15.536623 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Jan 28 00:01:15.536712 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Jan 28 00:01:15.536795 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 28 00:01:15.536888 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 28 00:01:15.536972 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Jan 28 00:01:15.537073 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Jan 28 00:01:15.537157 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 28 00:01:15.537243 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 28 00:01:15.537325 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 28 00:01:15.537404 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 28 00:01:15.537490 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 28 00:01:15.537574 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 28 00:01:15.537724 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 28 00:01:15.537812 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 28 00:01:15.537894 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 28 00:01:15.538118 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 28 00:01:15.538213 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 28 00:01:15.538303 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 28 00:01:15.538386 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 28 00:01:15.538474 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 28 00:01:15.538559 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 28 00:01:15.538676 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 28 00:01:15.538775 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 28 00:01:15.538857 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 28 00:01:15.538936 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 28 00:01:15.539067 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 28 00:01:15.539157 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 28 00:01:15.539236 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 28 00:01:15.539329 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 28 00:01:15.539409 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 28 00:01:15.539490 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 28 00:01:15.539574 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 28 00:01:15.542556 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 28 00:01:15.542696 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 28 00:01:15.542788 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 28 00:01:15.542872 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 28 00:01:15.542959 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 28 00:01:15.543069 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 28 00:01:15.543158 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 28 00:01:15.543241 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 28 00:01:15.543338 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 28 00:01:15.543425 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 28 00:01:15.543510 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 28 00:01:15.543604 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 28 00:01:15.543695 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 28 00:01:15.543778 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 28 00:01:15.543864 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 28 00:01:15.543945 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 28 
00:01:15.544045 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 28 00:01:15.544164 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 28 00:01:15.544256 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 28 00:01:15.544335 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 28 00:01:15.544424 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Jan 28 00:01:15.544504 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Jan 28 00:01:15.544587 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Jan 28 00:01:15.547835 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Jan 28 00:01:15.547933 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Jan 28 00:01:15.548086 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 28 00:01:15.548230 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Jan 28 00:01:15.548336 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 28 00:01:15.548439 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Jan 28 00:01:15.548536 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 28 00:01:15.548676 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Jan 28 00:01:15.548790 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 28 00:01:15.548893 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Jan 28 00:01:15.548989 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 28 00:01:15.549116 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Jan 28 00:01:15.549220 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 28 00:01:15.549332 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Jan 28 00:01:15.549438 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 28 00:01:15.549559 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Jan 28 00:01:15.549775 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Jan 28 00:01:15.549892 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Jan 28 00:01:15.550019 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 28 00:01:15.550127 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 28 00:01:15.550232 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 28 00:01:15.550346 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 28 00:01:15.550445 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 28 00:01:15.550549 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 28 00:01:15.552515 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 28 00:01:15.552709 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 28 00:01:15.552828 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 28 00:01:15.552914 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 28 00:01:15.553013 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 28 00:01:15.553096 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 28 00:01:15.553189 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Jan 28 00:01:15.553274 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 28 00:01:15.553361 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 28 00:01:15.553444 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 28 00:01:15.553526 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 28 00:01:15.554720 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 28 00:01:15.554854 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 28 00:01:15.554968 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 28 00:01:15.555070 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 28 00:01:15.555155 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 28 00:01:15.555236 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 28 00:01:15.555326 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 28 00:01:15.555409 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 28 00:01:15.555506 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 28 00:01:15.555603 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 28 00:01:15.555691 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 28 00:01:15.555771 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 28 00:01:15.555860 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 28 00:01:15.555944 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 28 00:01:15.556092 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 28 00:01:15.556194 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 28 00:01:15.559171 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 28 00:01:15.559306 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 28 00:01:15.559407 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Jan 28 00:01:15.559495 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Jan 28 00:01:15.559611 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Jan 28 00:01:15.559710 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 28 00:01:15.559802 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 28 00:01:15.559886 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 28 00:01:15.559966 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 28 00:01:15.560064 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 28 00:01:15.560147 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 28 00:01:15.560234 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 28 00:01:15.560324 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 28 00:01:15.560414 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 28 00:01:15.560494 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 28 00:01:15.560589 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 28 00:01:15.563382 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 28 00:01:15.563478 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 28 00:01:15.563561 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Jan 28 00:01:15.563668 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 28 00:01:15.563761 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 28 00:01:15.563851 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 28 00:01:15.563929 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 28 00:01:15.564059 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 28 00:01:15.564154 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 28 00:01:15.564246 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 28 00:01:15.564339 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 28 00:01:15.564417 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 28 00:01:15.564492 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 28 00:01:15.564577 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 28 00:01:15.565875 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 28 00:01:15.565974 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 28 00:01:15.566124 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 28 00:01:15.566206 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 28 00:01:15.566310 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 28 00:01:15.566415 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 28 00:01:15.566493 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 28 00:01:15.566569 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 28 00:01:15.566676 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 28 00:01:15.566754 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 28 00:01:15.566830 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 28 00:01:15.566922 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 28 00:01:15.567015 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 28 00:01:15.567099 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 28 00:01:15.567202 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 28 00:01:15.567292 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 28 00:01:15.567378 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 28 00:01:15.567389 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 28 00:01:15.567398 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 28 00:01:15.567407 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 28 00:01:15.567415 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 28 00:01:15.567424 kernel: iommu: Default domain type: Translated Jan 28 00:01:15.567432 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 28 00:01:15.567442 kernel: efivars: Registered efivars operations Jan 28 00:01:15.567450 kernel: vgaarb: loaded Jan 28 00:01:15.567458 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 28 00:01:15.567467 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:01:15.567475 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:01:15.567483 kernel: pnp: PnP ACPI init Jan 28 00:01:15.567581 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 28 
00:01:15.570473 kernel: pnp: PnP ACPI: found 1 devices Jan 28 00:01:15.570494 kernel: NET: Registered PF_INET protocol family Jan 28 00:01:15.570503 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 28 00:01:15.570512 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 28 00:01:15.570522 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:01:15.570531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:01:15.570539 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 28 00:01:15.570563 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 28 00:01:15.570572 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:01:15.570581 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 28 00:01:15.570608 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:01:15.570774 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 28 00:01:15.570788 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:01:15.570797 kernel: kvm [1]: HYP mode not available Jan 28 00:01:15.570808 kernel: Initialise system trusted keyrings Jan 28 00:01:15.570817 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 28 00:01:15.570825 kernel: Key type asymmetric registered Jan 28 00:01:15.570834 kernel: Asymmetric key parser 'x509' registered Jan 28 00:01:15.570842 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 28 00:01:15.570850 kernel: io scheduler mq-deadline registered Jan 28 00:01:15.570859 kernel: io scheduler kyber registered Jan 28 00:01:15.570869 kernel: io scheduler bfq registered Jan 28 00:01:15.570877 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 28 00:01:15.570968 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 28 00:01:15.571081 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 28 00:01:15.571167 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.571270 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 28 00:01:15.571425 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 28 00:01:15.571531 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.571637 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 28 00:01:15.571731 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 28 00:01:15.571846 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.571936 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 28 00:01:15.572078 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 28 00:01:15.572181 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.572339 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 28 00:01:15.572434 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 28 00:01:15.572518 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.572623 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Jan 28 00:01:15.572709 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 28 00:01:15.572804 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.572903 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 28 00:01:15.572987 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 28 00:01:15.573100 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.573189 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 28 00:01:15.573283 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 28 00:01:15.573364 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.573378 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 28 00:01:15.573462 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 28 00:01:15.573545 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 28 00:01:15.573638 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:01:15.573649 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 28 00:01:15.573658 kernel: ACPI: button: Power Button [PWRB] Jan 28 00:01:15.573673 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 28 00:01:15.573766 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 28 00:01:15.573873 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 28 00:01:15.573885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 00:01:15.573894 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 28 00:01:15.574008 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 28 00:01:15.574021 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 28 00:01:15.574038 kernel: thunder_xcv, ver 1.0 Jan 28 00:01:15.574047 kernel: thunder_bgx, ver 1.0 Jan 28 00:01:15.574055 kernel: nicpf, ver 1.0 Jan 28 00:01:15.574063 kernel: nicvf, ver 1.0 Jan 28 00:01:15.574182 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 28 00:01:15.574264 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-28T00:01:14 UTC (1769558474) Jan 28 00:01:15.574274 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 00:01:15.574285 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 28 00:01:15.574294 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:01:15.574302 kernel: watchdog: NMI not fully supported Jan 28 00:01:15.574310 kernel: watchdog: Hard watchdog permanently disabled Jan 28 00:01:15.574319 kernel: Segment Routing with IPv6 Jan 28 00:01:15.574327 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:01:15.574335 kernel: NET: Registered PF_PACKET protocol family Jan 28 00:01:15.574345 kernel: Key type dns_resolver registered Jan 28 00:01:15.574354 kernel: registered taskstats version 1 Jan 28 00:01:15.574370 kernel: Loading compiled-in X.509 certificates Jan 28 00:01:15.574379 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 9b9d0a6e8555c4a74bcb93286e875e2244e1db21' Jan 28 00:01:15.574387 kernel: Demotion targets for Node 0: null Jan 28 00:01:15.574395 kernel: Key type .fscrypt 
registered Jan 28 00:01:15.574402 kernel: Key type fscrypt-provisioning registered Jan 28 00:01:15.574412 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 00:01:15.574421 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:01:15.574429 kernel: ima: No architecture policies found Jan 28 00:01:15.574437 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 28 00:01:15.574446 kernel: clk: Disabling unused clocks Jan 28 00:01:15.574454 kernel: PM: genpd: Disabling unused power domains Jan 28 00:01:15.574462 kernel: Freeing unused kernel memory: 12480K Jan 28 00:01:15.574472 kernel: Run /init as init process Jan 28 00:01:15.574480 kernel: with arguments: Jan 28 00:01:15.574488 kernel: /init Jan 28 00:01:15.574496 kernel: with environment: Jan 28 00:01:15.574504 kernel: HOME=/ Jan 28 00:01:15.574512 kernel: TERM=linux Jan 28 00:01:15.574520 kernel: ACPI: bus type USB registered Jan 28 00:01:15.574528 kernel: usbcore: registered new interface driver usbfs Jan 28 00:01:15.574538 kernel: usbcore: registered new interface driver hub Jan 28 00:01:15.574546 kernel: usbcore: registered new device driver usb Jan 28 00:01:15.574684 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 28 00:01:15.574771 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 28 00:01:15.574867 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 28 00:01:15.574959 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 28 00:01:15.575059 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 28 00:01:15.575151 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 28 00:01:15.575266 kernel: hub 1-0:1.0: USB hub found Jan 28 00:01:15.575355 kernel: hub 1-0:1.0: 4 ports detected Jan 28 00:01:15.575459 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 28 00:01:15.575567 kernel: hub 2-0:1.0: USB hub found Jan 28 00:01:15.575688 kernel: hub 2-0:1.0: 4 ports detected Jan 28 00:01:15.575700 kernel: SCSI subsystem initialized Jan 28 00:01:15.575798 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 28 00:01:15.575901 kernel: scsi host0: Virtio SCSI HBA Jan 28 00:01:15.576062 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 28 00:01:15.576211 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 28 00:01:15.576318 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 28 00:01:15.576409 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 28 00:01:15.576501 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 28 00:01:15.576678 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 28 00:01:15.576802 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 28 00:01:15.576814 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 28 00:01:15.576901 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 28 00:01:15.577027 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 28 00:01:15.577123 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 28 00:01:15.577133 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 00:01:15.577145 kernel: GPT:25804799 != 80003071 Jan 28 00:01:15.577153 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 00:01:15.577162 kernel: GPT:25804799 != 80003071 Jan 28 00:01:15.577170 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 28 00:01:15.577177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 28 00:01:15.577276 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 28 00:01:15.577287 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 28 00:01:15.577297 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:01:15.577306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 28 00:01:15.577315 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 28 00:01:15.577323 kernel: raid6: neonx8 gen() 15691 MB/s Jan 28 00:01:15.577331 kernel: raid6: neonx4 gen() 12285 MB/s Jan 28 00:01:15.577339 kernel: raid6: neonx2 gen() 13072 MB/s Jan 28 00:01:15.577347 kernel: raid6: neonx1 gen() 10239 MB/s Jan 28 00:01:15.577355 kernel: raid6: int64x8 gen() 6780 MB/s Jan 28 00:01:15.577365 kernel: raid6: int64x4 gen() 7287 MB/s Jan 28 00:01:15.577492 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 28 00:01:15.577506 kernel: raid6: int64x2 gen() 5490 MB/s Jan 28 00:01:15.577514 kernel: raid6: int64x1 gen() 3854 MB/s Jan 28 00:01:15.577523 kernel: raid6: using algorithm neonx8 gen() 15691 MB/s Jan 28 00:01:15.577531 kernel: raid6: .... xor() 11815 MB/s, rmw enabled Jan 28 00:01:15.577542 kernel: raid6: using neon recovery algorithm Jan 28 00:01:15.577550 kernel: xor: measuring software checksum speed Jan 28 00:01:15.577558 kernel: 8regs : 21636 MB/sec Jan 28 00:01:15.577566 kernel: 32regs : 21699 MB/sec Jan 28 00:01:15.577575 kernel: arm64_neon : 28118 MB/sec Jan 28 00:01:15.577583 kernel: xor: using function: arm64_neon (28118 MB/sec) Jan 28 00:01:15.577650 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:01:15.577664 kernel: BTRFS: device fsid f7176ebb-63b5-458d-bfa0-a0dcd6bb053d devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (213) Jan 28 00:01:15.577674 kernel: BTRFS info (device dm-0): first mount of filesystem f7176ebb-63b5-458d-bfa0-a0dcd6bb053d Jan 28 00:01:15.577682 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:01:15.577690 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 28 00:01:15.577699 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:01:15.577707 kernel: BTRFS info (device dm-0): enabling free space tree Jan 28 00:01:15.577715 kernel: loop: module loaded Jan 28 00:01:15.577724 kernel: loop0: detected capacity change from 0 to 91832 Jan 28 00:01:15.577734 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:01:15.577900 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 28 00:01:15.577916 systemd[1]: Successfully made /usr/ read-only. Jan 28 00:01:15.577927 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:01:15.577936 systemd[1]: Detected virtualization kvm. Jan 28 00:01:15.577949 systemd[1]: Detected architecture arm64. Jan 28 00:01:15.577958 systemd[1]: Running in initrd. Jan 28 00:01:15.577966 systemd[1]: No hostname configured, using default hostname. Jan 28 00:01:15.577975 systemd[1]: Hostname set to . Jan 28 00:01:15.577983 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Jan 28 00:01:15.578030 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:01:15.578047 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:01:15.578056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:01:15.578065 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:01:15.578075 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:01:15.578084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:01:15.578093 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 00:01:15.578106 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:01:15.578121 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:01:15.578130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:01:15.578139 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:01:15.578148 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:01:15.578157 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:01:15.578168 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:01:15.578176 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:01:15.578185 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:01:15.578194 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:01:15.578203 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 00:01:15.578216 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:01:15.578225 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 28 00:01:15.578236 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:01:15.578245 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:01:15.578254 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:01:15.578263 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:01:15.578272 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:01:15.578288 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:01:15.578297 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:01:15.578308 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 00:01:15.578318 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 28 00:01:15.578327 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:01:15.578336 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:01:15.578345 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:01:15.578356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:01:15.578365 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 28 00:01:15.578374 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:01:15.578382 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:01:15.578391 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:01:15.578433 systemd-journald[350]: Collecting audit messages is enabled. Jan 28 00:01:15.578521 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:01:15.578533 kernel: Bridge firewalling registered Jan 28 00:01:15.578546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:01:15.578555 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:01:15.578564 kernel: audit: type=1130 audit(1769558475.537:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.578573 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:01:15.578582 kernel: audit: type=1130 audit(1769558475.553:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.578645 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:01:15.578657 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:01:15.578695 kernel: audit: type=1130 audit(1769558475.570:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.578705 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:01:15.578714 kernel: audit: type=1130 audit(1769558475.575:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.578725 systemd-journald[350]: Journal started Jan 28 00:01:15.578746 systemd-journald[350]: Runtime Journal (/run/log/journal/c4de30a6f2ab41e2a3c20f576293fb3e) is 8M, max 76.5M, 68.5M free. Jan 28 00:01:15.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:15.535787 systemd-modules-load[352]: Inserted module 'br_netfilter' Jan 28 00:01:15.589630 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:01:15.591000 audit: BPF prog-id=6 op=LOAD Jan 28 00:01:15.593629 kernel: audit: type=1334 audit(1769558475.591:6): prog-id=6 op=LOAD Jan 28 00:01:15.597686 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:01:15.599932 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:01:15.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.602659 kernel: audit: type=1130 audit(1769558475.599:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.605554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:01:15.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.613988 kernel: audit: type=1130 audit(1769558475.607:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.618908 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:01:15.640045 systemd-tmpfiles[381]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 28 00:01:15.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.641879 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:01:15.647635 kernel: audit: type=1130 audit(1769558475.642:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.650949 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:01:15.658789 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:01:15.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.663403 systemd-resolved[367]: Positive Trust Anchors: Jan 28 00:01:15.663416 systemd-resolved[367]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:01:15.663419 systemd-resolved[367]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 00:01:15.663451 systemd-resolved[367]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:01:15.676392 kernel: audit: type=1130 audit(1769558475.662:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.694501 dracut-cmdline[389]: dracut-109 Jan 28 00:01:15.696377 systemd-resolved[367]: Defaulting to hostname 'linux'. Jan 28 00:01:15.697976 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:01:15.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.699902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:01:15.703344 dracut-cmdline[389]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=880c7a57ca1a4cf41361128ef304e12abcda0ba85f8697ad932e9820a1865169 Jan 28 00:01:15.817656 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:01:15.828648 kernel: iscsi: registered transport (tcp) Jan 28 00:01:15.844684 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:01:15.844776 kernel: QLogic iSCSI HBA Driver Jan 28 00:01:15.878029 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:01:15.905231 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:01:15.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.906809 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:01:15.972937 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:01:15.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:15.975524 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 00:01:15.977309 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 00:01:16.023071 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:01:16.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 28 00:01:16.025000 audit: BPF prog-id=7 op=LOAD Jan 28 00:01:16.025000 audit: BPF prog-id=8 op=LOAD Jan 28 00:01:16.029072 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:01:16.070293 systemd-udevd[615]: Using default interface naming scheme 'v257'. Jan 28 00:01:16.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.081439 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:01:16.087785 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:01:16.121357 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation Jan 28 00:01:16.156687 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:01:16.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.160000 audit: BPF prog-id=9 op=LOAD Jan 28 00:01:16.161526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:01:16.168675 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:01:16.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.170788 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:01:16.222054 systemd-networkd[755]: lo: Link UP Jan 28 00:01:16.222707 systemd-networkd[755]: lo: Gained carrier Jan 28 00:01:16.224192 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:01:16.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.225793 systemd[1]: Reached target network.target - Network. Jan 28 00:01:16.252331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:01:16.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.257358 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:01:16.440542 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 28 00:01:16.462620 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 28 00:01:16.462686 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 28 00:01:16.471341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 28 00:01:16.473618 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 28 00:01:16.492167 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
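
The dracut-cmdline entry earlier in the log lists the key=value parameters the initrd acts on (root=, mount.usr=, verity.usrhash=, flatcar.oem.id=, and so on). As a minimal illustration only, not anything the boot itself runs, the Python sketch below splits such a command line into options; the sample string reuses a few parameters copied from that entry, and reading /proc/cmdline is left out because the full string is already in the log.

    def parse_cmdline(cmdline: str) -> dict:
        """Split a kernel command line into a mapping; bare flags map to True."""
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True  # later duplicates overwrite earlier ones
        return params

    # Sample parameters copied from the dracut-cmdline entry above.
    opts = parse_cmdline("root=LABEL=ROOT mount.usrflags=ro consoleblank=0 "
                         "flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner")
    print(opts["root"], opts["flatcar.oem.id"])  # -> LABEL=ROOT hetzner
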
Jan 28 00:01:16.515234 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 28 00:01:16.520889 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:01:16.528613 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 28 00:01:16.530286 kernel: usbcore: registered new interface driver usbhid Jan 28 00:01:16.530347 kernel: usbhid: USB HID core driver Jan 28 00:01:16.544368 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:01:16.546674 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:01:16.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.549098 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:01:16.553618 disk-uuid[806]: Primary Header is updated. Jan 28 00:01:16.553618 disk-uuid[806]: Secondary Entries is updated. Jan 28 00:01:16.553618 disk-uuid[806]: Secondary Header is updated. Jan 28 00:01:16.556536 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:01:16.562913 systemd-networkd[755]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:16.562921 systemd-networkd[755]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:01:16.566498 systemd-networkd[755]: eth0: Link UP Jan 28 00:01:16.568482 systemd-networkd[755]: eth0: Gained carrier Jan 28 00:01:16.568496 systemd-networkd[755]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:16.577735 systemd-networkd[755]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:16.577756 systemd-networkd[755]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:01:16.579939 systemd-networkd[755]: eth1: Link UP Jan 28 00:01:16.580210 systemd-networkd[755]: eth1: Gained carrier Jan 28 00:01:16.580230 systemd-networkd[755]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:16.630783 systemd-networkd[755]: eth0: DHCPv4 address 159.69.123.112/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 28 00:01:16.637728 systemd-networkd[755]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 28 00:01:16.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.647807 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:01:16.734442 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:01:16.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:16.737653 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
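
The dev-disk-by\x2dlabel-*.device units that systemd reports finding above correspond to udev-managed symlinks under /dev/disk/by-label (and /dev/disk/by-partlabel for USR-A). A small Python sketch that resolves those symlinks to the underlying block devices, using the labels named in the log, is shown below; it is only meaningful on a disk laid out the same way.

    import os

    def resolve_by_label(label: str, base: str = "/dev/disk/by-label") -> str:
        """Return the real block device behind a udev by-label symlink."""
        return os.path.realpath(os.path.join(base, label))

    # Labels taken from the device units logged above.
    for label in ("ROOT", "OEM", "EFI-SYSTEM"):
        print(label, "->", resolve_by_label(label))
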
Jan 28 00:01:16.739509 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:01:16.741219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:01:16.743820 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 00:01:16.775202 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:01:16.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.614633 disk-uuid[808]: Warning: The kernel is still using the old partition table. Jan 28 00:01:17.614633 disk-uuid[808]: The new table will be used at the next reboot or after you Jan 28 00:01:17.614633 disk-uuid[808]: run partprobe(8) or kpartx(8) Jan 28 00:01:17.614633 disk-uuid[808]: The operation has completed successfully. Jan 28 00:01:17.626922 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:01:17.627184 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:01:17.627000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.627000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.629580 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:01:17.679613 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (837) Jan 28 00:01:17.681271 kernel: BTRFS info (device sda6): first mount of filesystem 7e41befc-1b7e-4b8d-988a-42f0dc79ed16 Jan 28 00:01:17.681323 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:01:17.686908 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 28 00:01:17.686988 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:01:17.687011 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:01:17.694611 kernel: BTRFS info (device sda6): last unmount of filesystem 7e41befc-1b7e-4b8d-988a-42f0dc79ed16 Jan 28 00:01:17.695558 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 00:01:17.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.697768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 00:01:17.855929 ignition[856]: Ignition 2.24.0 Jan 28 00:01:17.855950 ignition[856]: Stage: fetch-offline Jan 28 00:01:17.856007 ignition[856]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:17.856017 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:17.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.858911 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:01:17.856198 ignition[856]: parsed url from cmdline: "" Jan 28 00:01:17.861739 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
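
The disk-uuid output above notes that the rewritten partition table is only picked up at the next reboot or after partprobe(8)/kpartx(8). One way to trigger the same re-read from code is the BLKRRPART ioctl (the request blockdev --rereadpt issues); a minimal Python sketch follows, assuming root and a hypothetical, otherwise idle /dev/sda.

    import fcntl
    import os

    BLKRRPART = 0x125F  # _IO(0x12, 95): ask the kernel to re-read the partition table

    def reread_partition_table(device: str = "/dev/sda") -> None:
        """Request a partition-table re-read; the kernel returns EBUSY while partitions are in use."""
        fd = os.open(device, os.O_RDONLY)
        try:
            fcntl.ioctl(fd, BLKRRPART)
        finally:
            os.close(fd)
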
Jan 28 00:01:17.856201 ignition[856]: no config URL provided Jan 28 00:01:17.856206 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:01:17.856214 ignition[856]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:01:17.856220 ignition[856]: failed to fetch config: resource requires networking Jan 28 00:01:17.856549 ignition[856]: Ignition finished successfully Jan 28 00:01:17.891120 ignition[863]: Ignition 2.24.0 Jan 28 00:01:17.891136 ignition[863]: Stage: fetch Jan 28 00:01:17.891317 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:17.891326 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:17.891426 ignition[863]: parsed url from cmdline: "" Jan 28 00:01:17.891430 ignition[863]: no config URL provided Jan 28 00:01:17.891442 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:01:17.891448 ignition[863]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:01:17.891485 ignition[863]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 28 00:01:17.897756 ignition[863]: GET result: OK Jan 28 00:01:17.897857 ignition[863]: parsing config with SHA512: 2e9a457bb6df499478a7bbc581588d433339c30eb04e57568e9ab816a8edc2eae23301215b3b6af303a149d992984bd7122574505a0821286e4e7ccc85814304 Jan 28 00:01:17.910051 unknown[863]: fetched base config from "system" Jan 28 00:01:17.910063 unknown[863]: fetched base config from "system" Jan 28 00:01:17.911103 ignition[863]: fetch: fetch complete Jan 28 00:01:17.910068 unknown[863]: fetched user config from "hetzner" Jan 28 00:01:17.911110 ignition[863]: fetch: fetch passed Jan 28 00:01:17.911176 ignition[863]: Ignition finished successfully Jan 28 00:01:17.915419 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 00:01:17.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.919327 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:01:17.950745 ignition[869]: Ignition 2.24.0 Jan 28 00:01:17.950767 ignition[869]: Stage: kargs Jan 28 00:01:17.950941 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:17.950950 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:17.951878 ignition[869]: kargs: kargs passed Jan 28 00:01:17.951928 ignition[869]: Ignition finished successfully Jan 28 00:01:17.955012 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 00:01:17.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:17.957458 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:01:18.005818 ignition[876]: Ignition 2.24.0 Jan 28 00:01:18.005833 ignition[876]: Stage: disks Jan 28 00:01:18.006057 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:18.006066 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:18.009394 ignition[876]: disks: disks passed Jan 28 00:01:18.009462 ignition[876]: Ignition finished successfully Jan 28 00:01:18.012549 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
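
The fetch stage above amounts to an HTTP GET against the Hetzner metadata service followed by a SHA-512 digest of the returned config. The Python sketch below mirrors that exchange for illustration only (it is not Ignition's implementation); the URL is copied from the log and is reachable only from inside a Hetzner instance.

    import hashlib
    import urllib.request

    USERDATA_URL = "http://169.254.169.254/hetzner/v1/userdata"  # endpoint from the GET logged above

    def fetch_userdata(url: str = USERDATA_URL, timeout: float = 10.0) -> tuple[bytes, str]:
        """Fetch the instance userdata and return it together with its SHA-512 digest."""
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
        return body, hashlib.sha512(body).hexdigest()

    if __name__ == "__main__":
        config, digest = fetch_userdata()
        print(f"{len(config)} bytes, sha512={digest}")
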
Jan 28 00:01:18.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.013878 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:01:18.015028 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:01:18.016680 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:01:18.018147 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:01:18.019435 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:01:18.021687 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 00:01:18.058549 systemd-fsck[884]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 28 00:01:18.064069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:01:18.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.068468 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 00:01:18.074746 systemd-networkd[755]: eth0: Gained IPv6LL Jan 28 00:01:18.150829 kernel: EXT4-fs (sda9): mounted filesystem e122e254-04a8-47c4-9c16-e71d001bbc70 r/w with ordered data mode. Quota mode: none. Jan 28 00:01:18.152812 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:01:18.155308 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:01:18.159128 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:01:18.160839 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:01:18.165972 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 28 00:01:18.167788 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:01:18.167835 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:01:18.184017 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:01:18.188377 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 00:01:18.194796 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (892) Jan 28 00:01:18.201450 kernel: BTRFS info (device sda6): first mount of filesystem 7e41befc-1b7e-4b8d-988a-42f0dc79ed16 Jan 28 00:01:18.201521 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:01:18.210745 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 28 00:01:18.210820 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:01:18.210877 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:01:18.213659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 00:01:18.259396 coreos-metadata[894]: Jan 28 00:01:18.259 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 28 00:01:18.262714 coreos-metadata[894]: Jan 28 00:01:18.262 INFO Fetch successful Jan 28 00:01:18.265742 coreos-metadata[894]: Jan 28 00:01:18.265 INFO wrote hostname ci-4593-0-0-n-20383d5ef7 to /sysroot/etc/hostname Jan 28 00:01:18.268056 systemd-networkd[755]: eth1: Gained IPv6LL Jan 28 00:01:18.269279 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:01:18.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.276525 kernel: kauditd_printk_skb: 24 callbacks suppressed Jan 28 00:01:18.276610 kernel: audit: type=1130 audit(1769558478.273:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.418465 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:01:18.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.423982 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 00:01:18.426629 kernel: audit: type=1130 audit(1769558478.419:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.427628 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:01:18.445717 kernel: BTRFS info (device sda6): last unmount of filesystem 7e41befc-1b7e-4b8d-988a-42f0dc79ed16 Jan 28 00:01:18.472198 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:01:18.476435 kernel: audit: type=1130 audit(1769558478.472:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.482507 ignition[993]: INFO : Ignition 2.24.0 Jan 28 00:01:18.482507 ignition[993]: INFO : Stage: mount Jan 28 00:01:18.484961 ignition[993]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:18.484961 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:18.484961 ignition[993]: INFO : mount: mount passed Jan 28 00:01:18.484961 ignition[993]: INFO : Ignition finished successfully Jan 28 00:01:18.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:18.487117 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:01:18.494441 kernel: audit: type=1130 audit(1769558478.490:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:18.494865 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:01:18.666201 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:01:18.671034 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:01:18.716131 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1004) Jan 28 00:01:18.716217 kernel: BTRFS info (device sda6): first mount of filesystem 7e41befc-1b7e-4b8d-988a-42f0dc79ed16 Jan 28 00:01:18.717147 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 28 00:01:18.721990 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 28 00:01:18.722074 kernel: BTRFS info (device sda6): turning on async discard Jan 28 00:01:18.722089 kernel: BTRFS info (device sda6): enabling free space tree Jan 28 00:01:18.724964 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:01:18.761286 ignition[1021]: INFO : Ignition 2.24.0 Jan 28 00:01:18.762162 ignition[1021]: INFO : Stage: files Jan 28 00:01:18.762558 ignition[1021]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:18.763285 ignition[1021]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:18.764543 ignition[1021]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:01:18.768105 ignition[1021]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:01:18.768105 ignition[1021]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:01:18.779012 ignition[1021]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:01:18.781215 ignition[1021]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:01:18.783232 unknown[1021]: wrote ssh authorized keys file for user: core Jan 28 00:01:18.785614 ignition[1021]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:01:18.789711 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 00:01:18.791200 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 28 00:01:18.852133 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:01:18.943906 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 28 00:01:18.943906 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 
00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:01:18.948581 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:01:18.960663 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:01:18.960663 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:01:18.960663 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 28 00:01:19.280460 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 00:01:19.860966 ignition[1021]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 28 00:01:19.860966 ignition[1021]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 00:01:19.864353 ignition[1021]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:01:19.867225 ignition[1021]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:01:19.867225 ignition[1021]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 00:01:19.867225 ignition[1021]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:01:19.875413 ignition[1021]: INFO : files: files passed Jan 28 
00:01:19.875413 ignition[1021]: INFO : Ignition finished successfully Jan 28 00:01:19.891827 kernel: audit: type=1130 audit(1769558479.876:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.872467 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 00:01:19.881331 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:01:19.887485 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:01:19.907813 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:01:19.908465 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:01:19.914079 kernel: audit: type=1130 audit(1769558479.909:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.914133 kernel: audit: type=1131 audit(1769558479.910:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.910000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.924034 initrd-setup-root-after-ignition[1053]: grep: Jan 28 00:01:19.925781 initrd-setup-root-after-ignition[1057]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:01:19.927235 initrd-setup-root-after-ignition[1053]: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:01:19.927235 initrd-setup-root-after-ignition[1053]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:01:19.931719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:01:19.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:19.935489 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:01:19.937479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:01:19.941409 kernel: audit: type=1130 audit(1769558479.932:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.011127 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:01:20.011340 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
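
The files stage above ends by writing /sysroot/etc/.ignition-result.json; once the system has switched root, the same file sits at /etc/.ignition-result.json. Since the log does not show its contents, the sketch below simply loads and pretty-prints it without assuming any particular schema.

    import json
    from pathlib import Path

    RESULT = Path("/etc/.ignition-result.json")  # written as /sysroot/etc/.ignition-result.json above

    def load_ignition_result(path: Path = RESULT) -> dict:
        """Return whatever provisioning summary Ignition left behind."""
        with path.open() as fh:
            return json.load(fh)

    if __name__ == "__main__":
        print(json.dumps(load_ignition_result(), indent=2))
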
Jan 28 00:01:20.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.017974 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:01:20.017000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.018721 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:01:20.021793 kernel: audit: type=1130 audit(1769558480.013:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.021821 kernel: audit: type=1131 audit(1769558480.017:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.021468 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:01:20.022568 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:01:20.065460 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:01:20.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.070146 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:01:20.091422 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:01:20.092654 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:01:20.093456 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:01:20.094812 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:01:20.095870 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:01:20.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.096085 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:01:20.097462 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:01:20.098807 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:01:20.099751 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:01:20.100963 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:01:20.102294 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:01:20.103531 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 28 00:01:20.104686 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:01:20.105836 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:01:20.107320 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:01:20.108646 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 28 00:01:20.109638 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:01:20.110556 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 00:01:20.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.110765 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:01:20.114108 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:01:20.115466 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:01:20.116493 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:01:20.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.116949 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:01:20.117866 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:01:20.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.118134 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:01:20.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.119752 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:01:20.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.120021 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:01:20.121587 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:01:20.121832 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:01:20.122670 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 28 00:01:20.122844 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 28 00:01:20.126914 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:01:20.127799 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:01:20.129714 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:01:20.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.135855 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:01:20.137130 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:01:20.137988 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:01:20.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:20.140153 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:01:20.140990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:01:20.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.142805 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:01:20.143738 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:01:20.144000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.150693 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:01:20.151526 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:01:20.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.166620 ignition[1077]: INFO : Ignition 2.24.0 Jan 28 00:01:20.166620 ignition[1077]: INFO : Stage: umount Jan 28 00:01:20.170467 ignition[1077]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:01:20.170467 ignition[1077]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 28 00:01:20.170467 ignition[1077]: INFO : umount: umount passed Jan 28 00:01:20.170467 ignition[1077]: INFO : Ignition finished successfully Jan 28 00:01:20.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.171346 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:01:20.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.171494 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:01:20.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.173785 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:01:20.174920 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:01:20.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.175051 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:01:20.177614 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jan 28 00:01:20.177755 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:01:20.183385 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 00:01:20.183465 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 00:01:20.184857 systemd[1]: Stopped target network.target - Network. Jan 28 00:01:20.186503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:01:20.186711 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:01:20.207000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.187912 systemd[1]: Stopped target paths.target - Path Units. Jan 28 00:01:20.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.189042 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:01:20.192753 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:01:20.197427 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:01:20.200615 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:01:20.202843 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:01:20.202953 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:01:20.204032 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:01:20.204076 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:01:20.205223 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 28 00:01:20.205284 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 28 00:01:20.206620 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:01:20.206700 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:01:20.207917 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:01:20.207981 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:01:20.209705 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:01:20.211337 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:01:20.225184 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:01:20.226034 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:01:20.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.230240 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:01:20.230474 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:01:20.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.236264 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:01:20.236412 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Jan 28 00:01:20.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.240000 audit: BPF prog-id=6 op=UNLOAD Jan 28 00:01:20.240000 audit: BPF prog-id=9 op=UNLOAD Jan 28 00:01:20.241798 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 28 00:01:20.243544 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:01:20.244324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:01:20.245340 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:01:20.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.245424 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:01:20.248413 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:01:20.250787 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:01:20.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.250918 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:01:20.253811 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:01:20.254008 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:01:20.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.258284 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:01:20.258361 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:01:20.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.262157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:01:20.291632 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:01:20.291904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:01:20.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.295876 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:01:20.296013 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:01:20.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.299234 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:01:20.299372 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:01:20.300835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 28 00:01:20.300880 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:01:20.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.301977 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:01:20.302058 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:01:20.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.303789 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:01:20.303854 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:01:20.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.305718 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:01:20.305789 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:01:20.308535 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:01:20.310280 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 28 00:01:20.312000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.310362 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:01:20.315000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.313464 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:01:20.313546 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:01:20.316550 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:01:20.316659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:01:20.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.340323 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:01:20.340585 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:01:20.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:20.343582 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:01:20.347239 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Jan 28 00:01:20.369123 systemd[1]: Switching root. Jan 28 00:01:20.418676 systemd-journald[350]: Journal stopped Jan 28 00:01:21.579668 systemd-journald[350]: Received SIGTERM from PID 1 (systemd). Jan 28 00:01:21.579750 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:01:21.579767 kernel: SELinux: policy capability open_perms=1 Jan 28 00:01:21.579778 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:01:21.579789 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:01:21.579803 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:01:21.579821 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:01:21.579835 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:01:21.579845 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:01:21.579855 kernel: SELinux: policy capability userspace_initial_context=0 Jan 28 00:01:21.579871 systemd[1]: Successfully loaded SELinux policy in 63.951ms. Jan 28 00:01:21.579894 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.835ms. Jan 28 00:01:21.579921 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 28 00:01:21.579938 systemd[1]: Detected virtualization kvm. Jan 28 00:01:21.579950 systemd[1]: Detected architecture arm64. Jan 28 00:01:21.579961 systemd[1]: Detected first boot. Jan 28 00:01:21.579972 systemd[1]: Hostname set to . Jan 28 00:01:21.579987 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 28 00:01:21.580000 zram_generator::config[1120]: No configuration found. Jan 28 00:01:21.580020 kernel: NET: Registered PF_VSOCK protocol family Jan 28 00:01:21.580032 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:01:21.580043 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:01:21.580055 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:01:21.580066 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:01:21.580079 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:01:21.580094 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:01:21.580111 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:01:21.580123 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:01:21.580137 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:01:21.580149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:01:21.580160 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:01:21.580171 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:01:21.580185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:01:21.580201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:01:21.580214 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jan 28 00:01:21.580229 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:01:21.580244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:01:21.580258 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:01:21.580272 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 28 00:01:21.580286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:01:21.580300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:01:21.580311 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:01:21.580322 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:01:21.580333 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:01:21.580347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:01:21.580362 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:01:21.580376 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:01:21.580387 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 28 00:01:21.580398 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:01:21.580411 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:01:21.580422 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:01:21.580438 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:01:21.580454 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 28 00:01:21.580467 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 28 00:01:21.580478 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 28 00:01:21.580489 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:01:21.580500 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 28 00:01:21.580511 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 28 00:01:21.580522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:01:21.580535 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:01:21.580547 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:01:21.580557 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:01:21.580568 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:01:21.580583 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:01:21.581140 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:01:21.581166 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:01:21.581185 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:01:21.581198 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:01:21.581209 systemd[1]: Reached target machines.target - Containers. 
Jan 28 00:01:21.581220 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:01:21.581236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:01:21.581248 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:01:21.581259 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:01:21.581271 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:01:21.581283 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:01:21.581295 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:01:21.581306 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:01:21.581319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:01:21.581331 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:01:21.581342 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:01:21.581354 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:01:21.581365 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:01:21.581377 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:01:21.581392 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:01:21.581404 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:01:21.581415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:01:21.581429 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:01:21.581441 kernel: ACPI: bus type drm_connector registered Jan 28 00:01:21.581453 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:01:21.581466 kernel: fuse: init (API version 7.41) Jan 28 00:01:21.581476 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 28 00:01:21.581488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:01:21.581502 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:01:21.581516 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:01:21.581528 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:01:21.581542 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:01:21.581558 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:01:21.581571 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:01:21.581583 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:01:21.581632 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:01:21.581646 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:01:21.581660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:01:21.581675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 28 00:01:21.581688 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:01:21.581703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:01:21.581718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:01:21.581731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:01:21.581745 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:01:21.581758 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:01:21.581769 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:01:21.581782 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:01:21.581794 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:01:21.581807 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 28 00:01:21.581819 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:01:21.581830 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:01:21.581845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:01:21.581860 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:01:21.581876 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 28 00:01:21.581891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:01:21.581920 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:01:21.581936 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:01:21.581949 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:01:21.581964 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:01:21.582019 systemd-journald[1183]: Collecting audit messages is enabled. Jan 28 00:01:21.583708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:01:21.583754 systemd-journald[1183]: Journal started Jan 28 00:01:21.583795 systemd-journald[1183]: Runtime Journal (/run/log/journal/c4de30a6f2ab41e2a3c20f576293fb3e) is 8M, max 76.5M, 68.5M free. Jan 28 00:01:21.260000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 28 00:01:21.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:21.393000 audit: BPF prog-id=14 op=UNLOAD Jan 28 00:01:21.393000 audit: BPF prog-id=13 op=UNLOAD Jan 28 00:01:21.394000 audit: BPF prog-id=15 op=LOAD Jan 28 00:01:21.394000 audit: BPF prog-id=16 op=LOAD Jan 28 00:01:21.394000 audit: BPF prog-id=17 op=LOAD Jan 28 00:01:21.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:21.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.575000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 28 00:01:21.575000 audit[1183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffffe6108a0 a2=4000 a3=0 items=0 ppid=1 pid=1183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:21.575000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 28 00:01:21.175395 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:01:21.202613 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 28 00:01:21.203199 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:01:21.590313 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:01:21.595676 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:01:21.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.597603 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:01:21.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.602784 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:01:21.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.607758 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 28 00:01:21.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.610054 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:01:21.614616 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:01:21.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.616984 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:01:21.649691 kernel: loop1: detected capacity change from 0 to 100192 Jan 28 00:01:21.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:21.642300 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:01:21.651138 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:01:21.653941 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:01:21.657847 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:01:21.662801 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 28 00:01:21.666630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:01:21.696637 kernel: loop2: detected capacity change from 0 to 45344 Jan 28 00:01:21.701498 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:01:21.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.709749 systemd-journald[1183]: Time spent on flushing to /var/log/journal/c4de30a6f2ab41e2a3c20f576293fb3e is 27.049ms for 1299 entries. Jan 28 00:01:21.709749 systemd-journald[1183]: System Journal (/var/log/journal/c4de30a6f2ab41e2a3c20f576293fb3e) is 8M, max 588.1M, 580.1M free. Jan 28 00:01:21.764725 systemd-journald[1183]: Received client request to flush runtime journal. Jan 28 00:01:21.764809 kernel: loop3: detected capacity change from 0 to 207008 Jan 28 00:01:21.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.708847 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:01:21.711384 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 28 00:01:21.719827 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:01:21.769649 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:01:21.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.795694 kernel: loop4: detected capacity change from 0 to 8 Jan 28 00:01:21.799843 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:01:21.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.802000 audit: BPF prog-id=18 op=LOAD Jan 28 00:01:21.802000 audit: BPF prog-id=19 op=LOAD Jan 28 00:01:21.802000 audit: BPF prog-id=20 op=LOAD Jan 28 00:01:21.806574 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 28 00:01:21.807000 audit: BPF prog-id=21 op=LOAD Jan 28 00:01:21.812087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 28 00:01:21.816713 kernel: loop5: detected capacity change from 0 to 100192 Jan 28 00:01:21.817147 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:01:21.826000 audit: BPF prog-id=22 op=LOAD Jan 28 00:01:21.826000 audit: BPF prog-id=23 op=LOAD Jan 28 00:01:21.826000 audit: BPF prog-id=24 op=LOAD Jan 28 00:01:21.830161 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 28 00:01:21.832000 audit: BPF prog-id=25 op=LOAD Jan 28 00:01:21.832000 audit: BPF prog-id=26 op=LOAD Jan 28 00:01:21.832000 audit: BPF prog-id=27 op=LOAD Jan 28 00:01:21.836202 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 00:01:21.846626 kernel: loop6: detected capacity change from 0 to 45344 Jan 28 00:01:21.867633 kernel: loop7: detected capacity change from 0 to 207008 Jan 28 00:01:21.887236 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 28 00:01:21.887263 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jan 28 00:01:21.898985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:01:21.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:21.903657 kernel: loop1: detected capacity change from 0 to 8 Jan 28 00:01:21.905785 (sd-merge)[1265]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 28 00:01:21.919861 (sd-merge)[1265]: Merged extensions into '/usr'. Jan 28 00:01:21.930805 systemd[1]: Reload requested from client PID 1209 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:01:21.930828 systemd[1]: Reloading... Jan 28 00:01:21.944094 systemd-nsresourced[1268]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 28 00:01:22.125655 zram_generator::config[1316]: No configuration found. Jan 28 00:01:22.220719 systemd-oomd[1264]: No swap; memory pressure usage will be degraded Jan 28 00:01:22.238353 systemd-resolved[1266]: Positive Trust Anchors: Jan 28 00:01:22.241112 systemd-resolved[1266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:01:22.241120 systemd-resolved[1266]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 28 00:01:22.241160 systemd-resolved[1266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:01:22.264033 systemd-resolved[1266]: Using system hostname 'ci-4593-0-0-n-20383d5ef7'. Jan 28 00:01:22.376560 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:01:22.377139 systemd[1]: Reloading finished in 445 ms. Jan 28 00:01:22.394658 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. 
Jan 28 00:01:22.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.395851 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:01:22.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.396946 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 28 00:01:22.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.397915 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:01:22.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.399093 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:01:22.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.400353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:01:22.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.405101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:01:22.419976 systemd[1]: Starting ensure-sysext.service... Jan 28 00:01:22.423778 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:01:22.424000 audit: BPF prog-id=8 op=UNLOAD Jan 28 00:01:22.424000 audit: BPF prog-id=7 op=UNLOAD Jan 28 00:01:22.425000 audit: BPF prog-id=28 op=LOAD Jan 28 00:01:22.425000 audit: BPF prog-id=29 op=LOAD Jan 28 00:01:22.429000 audit: BPF prog-id=30 op=LOAD Jan 28 00:01:22.430000 audit: BPF prog-id=18 op=UNLOAD Jan 28 00:01:22.430000 audit: BPF prog-id=31 op=LOAD Jan 28 00:01:22.430000 audit: BPF prog-id=32 op=LOAD Jan 28 00:01:22.430000 audit: BPF prog-id=19 op=UNLOAD Jan 28 00:01:22.430000 audit: BPF prog-id=20 op=UNLOAD Jan 28 00:01:22.430000 audit: BPF prog-id=33 op=LOAD Jan 28 00:01:22.430000 audit: BPF prog-id=15 op=UNLOAD Jan 28 00:01:22.431000 audit: BPF prog-id=34 op=LOAD Jan 28 00:01:22.431000 audit: BPF prog-id=35 op=LOAD Jan 28 00:01:22.431000 audit: BPF prog-id=16 op=UNLOAD Jan 28 00:01:22.431000 audit: BPF prog-id=17 op=UNLOAD Jan 28 00:01:22.431000 audit: BPF prog-id=36 op=LOAD Jan 28 00:01:22.431000 audit: BPF prog-id=22 op=UNLOAD Jan 28 00:01:22.431000 audit: BPF prog-id=37 op=LOAD Jan 28 00:01:22.431000 audit: BPF prog-id=38 op=LOAD Jan 28 00:01:22.431000 audit: BPF prog-id=23 op=UNLOAD Jan 28 00:01:22.431000 audit: BPF prog-id=24 op=UNLOAD Jan 28 00:01:22.428845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 28 00:01:22.438000 audit: BPF prog-id=39 op=LOAD Jan 28 00:01:22.438000 audit: BPF prog-id=25 op=UNLOAD Jan 28 00:01:22.438000 audit: BPF prog-id=40 op=LOAD Jan 28 00:01:22.438000 audit: BPF prog-id=41 op=LOAD Jan 28 00:01:22.438000 audit: BPF prog-id=26 op=UNLOAD Jan 28 00:01:22.438000 audit: BPF prog-id=27 op=UNLOAD Jan 28 00:01:22.440000 audit: BPF prog-id=42 op=LOAD Jan 28 00:01:22.440000 audit: BPF prog-id=21 op=UNLOAD Jan 28 00:01:22.453939 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:01:22.453959 systemd[1]: Reloading... Jan 28 00:01:22.475271 systemd-udevd[1352]: Using default interface naming scheme 'v257'. Jan 28 00:01:22.476975 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 28 00:01:22.477015 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 28 00:01:22.477243 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:01:22.482044 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 28 00:01:22.482101 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 28 00:01:22.493015 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:01:22.493031 systemd-tmpfiles[1351]: Skipping /boot Jan 28 00:01:22.509806 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:01:22.510297 systemd-tmpfiles[1351]: Skipping /boot Jan 28 00:01:22.561636 zram_generator::config[1391]: No configuration found. Jan 28 00:01:22.780655 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:01:22.859705 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 28 00:01:22.860065 systemd[1]: Reloading finished in 405 ms. Jan 28 00:01:22.877863 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:01:22.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:22.890340 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:01:22.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:22.898000 audit: BPF prog-id=43 op=LOAD Jan 28 00:01:22.900000 audit: BPF prog-id=30 op=UNLOAD Jan 28 00:01:22.900000 audit: BPF prog-id=44 op=LOAD Jan 28 00:01:22.900000 audit: BPF prog-id=45 op=LOAD Jan 28 00:01:22.900000 audit: BPF prog-id=31 op=UNLOAD Jan 28 00:01:22.900000 audit: BPF prog-id=32 op=UNLOAD Jan 28 00:01:22.901000 audit: BPF prog-id=46 op=LOAD Jan 28 00:01:22.901000 audit: BPF prog-id=47 op=LOAD Jan 28 00:01:22.901000 audit: BPF prog-id=28 op=UNLOAD Jan 28 00:01:22.901000 audit: BPF prog-id=29 op=UNLOAD Jan 28 00:01:22.902000 audit: BPF prog-id=48 op=LOAD Jan 28 00:01:22.902000 audit: BPF prog-id=36 op=UNLOAD Jan 28 00:01:22.902000 audit: BPF prog-id=49 op=LOAD Jan 28 00:01:22.902000 audit: BPF prog-id=50 op=LOAD Jan 28 00:01:22.902000 audit: BPF prog-id=37 op=UNLOAD Jan 28 00:01:22.902000 audit: BPF prog-id=38 op=UNLOAD Jan 28 00:01:22.905000 audit: BPF prog-id=51 op=LOAD Jan 28 00:01:22.905000 audit: BPF prog-id=33 op=UNLOAD Jan 28 00:01:22.905000 audit: BPF prog-id=52 op=LOAD Jan 28 00:01:22.905000 audit: BPF prog-id=53 op=LOAD Jan 28 00:01:22.905000 audit: BPF prog-id=34 op=UNLOAD Jan 28 00:01:22.905000 audit: BPF prog-id=35 op=UNLOAD Jan 28 00:01:22.906000 audit: BPF prog-id=54 op=LOAD Jan 28 00:01:22.906000 audit: BPF prog-id=42 op=UNLOAD Jan 28 00:01:22.906000 audit: BPF prog-id=55 op=LOAD Jan 28 00:01:22.906000 audit: BPF prog-id=39 op=UNLOAD Jan 28 00:01:22.906000 audit: BPF prog-id=56 op=LOAD Jan 28 00:01:22.906000 audit: BPF prog-id=57 op=LOAD Jan 28 00:01:22.906000 audit: BPF prog-id=40 op=UNLOAD Jan 28 00:01:22.906000 audit: BPF prog-id=41 op=UNLOAD Jan 28 00:01:22.926565 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 28 00:01:22.933820 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:01:22.939859 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:01:22.945253 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:01:22.947000 audit: BPF prog-id=58 op=LOAD Jan 28 00:01:22.948689 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:01:22.952954 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:01:22.964314 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 28 00:01:22.976199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:01:22.978993 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:01:22.982688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:01:22.994007 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:01:22.995819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:01:22.996080 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:01:22.996181 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 28 00:01:23.000172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:01:23.000352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:01:23.000507 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:01:23.001293 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:01:23.008088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:01:23.013941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:01:23.015406 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:01:23.016814 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 28 00:01:23.016947 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 28 00:01:23.029753 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 28 00:01:23.037776 systemd[1]: Finished ensure-sysext.service. Jan 28 00:01:23.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.042000 audit: BPF prog-id=59 op=LOAD Jan 28 00:01:23.048055 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 00:01:23.056651 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 28 00:01:23.056761 kernel: [drm] features: -context_init Jan 28 00:01:23.074708 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:01:23.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.075850 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:01:23.087000 audit[1465]: SYSTEM_BOOT pid=1465 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.099344 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 00:01:23.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.122776 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 28 00:01:23.123096 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:01:23.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.130844 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:01:23.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.137568 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 28 00:01:23.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:23.151391 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:01:23.153149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:01:23.154852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:01:23.157526 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:01:23.164099 kernel: [drm] number of scanouts: 1 Jan 28 00:01:23.164203 kernel: [drm] number of cap sets: 0 Jan 28 00:01:23.180930 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 28 00:01:23.193629 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:01:23.196081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:01:23.208000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 28 00:01:23.208000 audit[1516]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc078fce0 a2=420 a3=0 items=0 ppid=1459 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:23.208000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:01:23.209250 augenrules[1516]: No rules Jan 28 00:01:23.214077 kernel: Console: switching to colour frame buffer device 160x50 Jan 28 00:01:23.237242 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:01:23.237582 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:01:23.238542 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:01:23.238804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Jan 28 00:01:23.242117 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:01:23.250023 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 28 00:01:23.269546 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:01:23.295375 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:01:23.389942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:01:23.393735 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:01:23.415311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:01:23.471764 systemd-networkd[1464]: lo: Link UP Jan 28 00:01:23.471780 systemd-networkd[1464]: lo: Gained carrier Jan 28 00:01:23.475007 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:01:23.476237 systemd[1]: Reached target network.target - Network. Jan 28 00:01:23.482035 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 28 00:01:23.484111 systemd-networkd[1464]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:23.484120 systemd-networkd[1464]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:01:23.485075 systemd-networkd[1464]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:23.485090 systemd-networkd[1464]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:01:23.491446 systemd-networkd[1464]: eth0: Link UP Jan 28 00:01:23.491622 systemd-networkd[1464]: eth0: Gained carrier Jan 28 00:01:23.491650 systemd-networkd[1464]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:23.492163 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:01:23.510499 systemd-networkd[1464]: eth1: Link UP Jan 28 00:01:23.513524 systemd-networkd[1464]: eth1: Gained carrier Jan 28 00:01:23.513562 systemd-networkd[1464]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 28 00:01:23.545160 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 00:01:23.546968 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:01:23.559970 systemd-networkd[1464]: eth0: DHCPv4 address 159.69.123.112/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 28 00:01:23.569847 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:01:23.570711 systemd-networkd[1464]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 28 00:01:23.575360 systemd-timesyncd[1478]: Network configuration changed, trying to establish connection. Jan 28 00:01:23.581651 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 28 00:01:23.684095 ldconfig[1462]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 28 00:01:23.692351 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:01:23.695345 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:01:23.718721 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:01:23.721067 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:01:23.722249 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:01:23.723277 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:01:23.724584 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:01:23.725358 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:01:23.726232 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 28 00:01:23.727222 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 28 00:01:23.727958 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:01:23.728694 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:01:23.728734 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:01:23.729271 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:01:23.731537 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:01:23.735094 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:01:23.738553 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 28 00:01:23.739792 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 28 00:01:23.740752 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 28 00:01:23.747782 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:01:23.750006 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 28 00:01:23.751985 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:01:23.752965 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:01:23.753660 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:01:23.754264 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:01:23.754299 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:01:23.755680 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:01:23.759868 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 00:01:23.762150 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:01:23.764997 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:01:23.771333 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:01:23.772995 systemd-timesyncd[1478]: Contacted time server 51.75.67.47:123 (0.flatcar.pool.ntp.org). Jan 28 00:01:23.773237 systemd-timesyncd[1478]: Initial clock synchronization to Wed 2026-01-28 00:01:23.435543 UTC. Jan 28 00:01:23.777949 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 28 00:01:23.778627 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:01:23.781744 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:01:23.786967 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:01:23.791243 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 28 00:01:23.803329 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:01:23.808968 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 00:01:23.814468 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:01:23.815280 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:01:23.815854 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:01:23.824186 jq[1552]: false Jan 28 00:01:23.824845 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:01:23.831856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:01:23.845802 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:01:23.847140 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:01:23.847669 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:01:23.863396 extend-filesystems[1553]: Found /dev/sda6 Jan 28 00:01:23.878087 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:01:23.879712 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:01:23.885346 extend-filesystems[1553]: Found /dev/sda9 Jan 28 00:01:23.888676 jq[1566]: true Jan 28 00:01:23.897620 extend-filesystems[1553]: Checking size of /dev/sda9 Jan 28 00:01:23.910448 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:01:23.911682 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:01:23.929107 coreos-metadata[1549]: Jan 28 00:01:23.928 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 28 00:01:23.931524 coreos-metadata[1549]: Jan 28 00:01:23.931 INFO Fetch successful Jan 28 00:01:23.943783 coreos-metadata[1549]: Jan 28 00:01:23.941 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 28 00:01:23.946326 update_engine[1562]: I20260128 00:01:23.945988 1562 main.cc:92] Flatcar Update Engine starting Jan 28 00:01:23.946698 coreos-metadata[1549]: Jan 28 00:01:23.946 INFO Fetch successful Jan 28 00:01:23.954185 tar[1572]: linux-arm64/LICENSE Jan 28 00:01:23.958150 tar[1572]: linux-arm64/helm Jan 28 00:01:23.970953 jq[1589]: true Jan 28 00:01:23.972510 dbus-daemon[1550]: [system] SELinux support is enabled Jan 28 00:01:23.974069 extend-filesystems[1553]: Resized partition /dev/sda9 Jan 28 00:01:23.973248 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:01:23.980465 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 28 00:01:23.980517 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:01:23.982936 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:01:23.982972 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 00:01:23.988086 extend-filesystems[1605]: resize2fs 1.47.3 (8-Jul-2025) Jan 28 00:01:23.995306 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:01:24.002319 update_engine[1562]: I20260128 00:01:23.998987 1562 update_check_scheduler.cc:74] Next update check in 8m55s Jan 28 00:01:23.999538 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:01:24.012623 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 28 00:01:24.080735 systemd-logind[1561]: New seat seat0. Jan 28 00:01:24.089031 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (Power Button) Jan 28 00:01:24.089063 systemd-logind[1561]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 28 00:01:24.089396 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:01:24.103284 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 00:01:24.105275 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 00:01:24.135605 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 28 00:01:24.146295 extend-filesystems[1605]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 28 00:01:24.146295 extend-filesystems[1605]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 28 00:01:24.146295 extend-filesystems[1605]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Jan 28 00:01:24.158047 extend-filesystems[1553]: Resized filesystem in /dev/sda9 Jan 28 00:01:24.160021 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:01:24.148607 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:01:24.150870 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:01:24.159999 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:01:24.165573 systemd[1]: Starting sshkeys.service... Jan 28 00:01:24.237162 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 00:01:24.239810 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 28 00:01:24.329957 locksmithd[1608]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:01:24.339847 coreos-metadata[1642]: Jan 28 00:01:24.339 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 28 00:01:24.339847 coreos-metadata[1642]: Jan 28 00:01:24.339 INFO Fetch successful Jan 28 00:01:24.342452 unknown[1642]: wrote ssh authorized keys file for user: core Jan 28 00:01:24.379171 containerd[1582]: time="2026-01-28T00:01:24Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 28 00:01:24.383136 containerd[1582]: time="2026-01-28T00:01:24.382547352Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 28 00:01:24.397969 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:01:24.401315 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 00:01:24.405839 systemd[1]: Finished sshkeys.service. Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418115378Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.797µs" Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418160741Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418218134Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418230471Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418397288Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418418514Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418497670Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418511884Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418842644Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418860460Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418873908Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 28 00:01:24.420889 containerd[1582]: time="2026-01-28T00:01:24.418883333Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419047353Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419060495Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419119268Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419292560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419319954Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419329686Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 28 00:01:24.421198 containerd[1582]: time="2026-01-28T00:01:24.419372175Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 28 00:01:24.423825 containerd[1582]: time="2026-01-28T00:01:24.423768457Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 28 00:01:24.423969 containerd[1582]: time="2026-01-28T00:01:24.423943741Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:01:24.430221 containerd[1582]: time="2026-01-28T00:01:24.430157807Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 28 00:01:24.430349 containerd[1582]: time="2026-01-28T00:01:24.430264050Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 00:01:24.430404 containerd[1582]: time="2026-01-28T00:01:24.430380255Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 28 00:01:24.430404 containerd[1582]: time="2026-01-28T00:01:24.430401519Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 28 00:01:24.430496 containerd[1582]: time="2026-01-28T00:01:24.430419258Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 28 00:01:24.430496 containerd[1582]: time="2026-01-28T00:01:24.430435273Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 28 00:01:24.430496 containerd[1582]: time="2026-01-28T00:01:24.430449449Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 28 00:01:24.430496 containerd[1582]: time="2026-01-28T00:01:24.430461901Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 28 00:01:24.430496 containerd[1582]: time="2026-01-28T00:01:24.430475770Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Jan 28 00:01:24.430496 containerd[1582]: time="2026-01-28T00:01:24.430490330Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 28 00:01:24.430670 containerd[1582]: time="2026-01-28T00:01:24.430502628Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 28 00:01:24.430670 containerd[1582]: time="2026-01-28T00:01:24.430514659Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 28 00:01:24.430670 containerd[1582]: time="2026-01-28T00:01:24.430525157Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 28 00:01:24.430670 containerd[1582]: time="2026-01-28T00:01:24.430541057Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 28 00:01:24.430744 containerd[1582]: time="2026-01-28T00:01:24.430721628Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 28 00:01:24.430760 containerd[1582]: time="2026-01-28T00:01:24.430747260Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 28 00:01:24.430780 containerd[1582]: time="2026-01-28T00:01:24.430765382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 28 00:01:24.430808 containerd[1582]: time="2026-01-28T00:01:24.430778141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 28 00:01:24.430808 containerd[1582]: time="2026-01-28T00:01:24.430791665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 28 00:01:24.430808 containerd[1582]: time="2026-01-28T00:01:24.430804615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 28 00:01:24.430878 containerd[1582]: time="2026-01-28T00:01:24.430856951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 28 00:01:24.430878 containerd[1582]: time="2026-01-28T00:01:24.430869978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 28 00:01:24.430975 containerd[1582]: time="2026-01-28T00:01:24.430881472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 28 00:01:24.430975 containerd[1582]: time="2026-01-28T00:01:24.430893273Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 28 00:01:24.430975 containerd[1582]: time="2026-01-28T00:01:24.430920360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 28 00:01:24.434801 containerd[1582]: time="2026-01-28T00:01:24.433418783Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 28 00:01:24.434801 containerd[1582]: time="2026-01-28T00:01:24.433547708Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 28 00:01:24.434801 containerd[1582]: time="2026-01-28T00:01:24.433660886Z" level=info msg="Start snapshots syncer" Jan 28 00:01:24.434801 containerd[1582]: time="2026-01-28T00:01:24.433765559Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 28 
00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434187888Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434247312Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434321334Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434482213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434511561Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434528457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434542326Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434566924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 28 00:01:24.434986 containerd[1582]: time="2026-01-28T00:01:24.434579605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 28 00:01:24.436003 containerd[1582]: time="2026-01-28T00:01:24.435962454Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 28 00:01:24.436226 containerd[1582]: 
time="2026-01-28T00:01:24.436208120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 28 00:01:24.436362 containerd[1582]: time="2026-01-28T00:01:24.436342677Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 28 00:01:24.436627 containerd[1582]: time="2026-01-28T00:01:24.436609263Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 00:01:24.437038 containerd[1582]: time="2026-01-28T00:01:24.437011401Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 28 00:01:24.437038 containerd[1582]: time="2026-01-28T00:01:24.437036305Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 00:01:24.437104 containerd[1582]: time="2026-01-28T00:01:24.437052971Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 28 00:01:24.437104 containerd[1582]: time="2026-01-28T00:01:24.437062358Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 28 00:01:24.437104 containerd[1582]: time="2026-01-28T00:01:24.437082243Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 28 00:01:24.437104 containerd[1582]: time="2026-01-28T00:01:24.437097607Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 28 00:01:24.438004 containerd[1582]: time="2026-01-28T00:01:24.437207528Z" level=info msg="runtime interface created" Jan 28 00:01:24.438004 containerd[1582]: time="2026-01-28T00:01:24.437216225Z" level=info msg="created NRI interface" Jan 28 00:01:24.438004 containerd[1582]: time="2026-01-28T00:01:24.437227643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 28 00:01:24.438004 containerd[1582]: time="2026-01-28T00:01:24.437245918Z" level=info msg="Connect containerd service" Jan 28 00:01:24.438004 containerd[1582]: time="2026-01-28T00:01:24.437285726Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:01:24.442970 containerd[1582]: time="2026-01-28T00:01:24.442910262Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:01:24.627698 containerd[1582]: time="2026-01-28T00:01:24.627538735Z" level=info msg="Start subscribing containerd event" Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628021140Z" level=info msg="Start recovering state" Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628120449Z" level=info msg="Start event monitor" Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628135391Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628142517Z" level=info msg="Start streaming server" Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628152058Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 28 00:01:24.628741 containerd[1582]: 
time="2026-01-28T00:01:24.628160333Z" level=info msg="runtime interface starting up..." Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628166348Z" level=info msg="starting plugins..." Jan 28 00:01:24.628741 containerd[1582]: time="2026-01-28T00:01:24.628179873Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 28 00:01:24.629725 containerd[1582]: time="2026-01-28T00:01:24.629454831Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:01:24.629725 containerd[1582]: time="2026-01-28T00:01:24.629524217Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:01:24.633224 containerd[1582]: time="2026-01-28T00:01:24.631707703Z" level=info msg="containerd successfully booted in 0.252973s" Jan 28 00:01:24.632025 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:01:24.658277 sshd_keygen[1595]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:01:24.681958 tar[1572]: linux-arm64/README.md Jan 28 00:01:24.688721 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:01:24.700360 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:01:24.705784 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:01:24.729646 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:01:24.731865 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 00:01:24.735797 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:01:24.762760 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:01:24.772192 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:01:24.776864 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 28 00:01:24.778040 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:01:24.795058 systemd-networkd[1464]: eth0: Gained IPv6LL Jan 28 00:01:24.800781 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:01:24.803847 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:01:24.808936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:01:24.812974 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:01:24.863255 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:01:25.243393 systemd-networkd[1464]: eth1: Gained IPv6LL Jan 28 00:01:25.728340 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:25.733066 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:01:25.737825 systemd[1]: Startup finished in 1.900s (kernel) + 5.355s (initrd) + 5.239s (userspace) = 12.495s. 
Jan 28 00:01:25.759137 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:01:26.330689 kubelet[1703]: E0128 00:01:26.330580 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:01:26.335208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:01:26.335746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:01:26.338744 systemd[1]: kubelet.service: Consumed 965ms CPU time, 256.1M memory peak. Jan 28 00:01:27.322030 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:01:27.326071 systemd[1]: Started sshd@0-159.69.123.112:22-20.161.92.111:50456.service - OpenSSH per-connection server daemon (20.161.92.111:50456). Jan 28 00:01:27.878373 sshd[1717]: Accepted publickey for core from 20.161.92.111 port 50456 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:27.881821 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:27.898670 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:01:27.900517 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:01:27.903400 systemd-logind[1561]: New session 1 of user core. Jan 28 00:01:27.931295 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:01:27.939072 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:01:27.960124 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:27.964510 systemd-logind[1561]: New session 2 of user core. Jan 28 00:01:28.110163 systemd[1723]: Queued start job for default target default.target. Jan 28 00:01:28.118922 systemd[1723]: Created slice app.slice - User Application Slice. Jan 28 00:01:28.118998 systemd[1723]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 28 00:01:28.119042 systemd[1723]: Reached target paths.target - Paths. Jan 28 00:01:28.119345 systemd[1723]: Reached target timers.target - Timers. Jan 28 00:01:28.121785 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:01:28.124865 systemd[1723]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 28 00:01:28.137625 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:01:28.137749 systemd[1723]: Reached target sockets.target - Sockets. Jan 28 00:01:28.142791 systemd[1723]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 28 00:01:28.142919 systemd[1723]: Reached target basic.target - Basic System. Jan 28 00:01:28.142988 systemd[1723]: Reached target default.target - Main User Target. Jan 28 00:01:28.143021 systemd[1723]: Startup finished in 169ms. Jan 28 00:01:28.143464 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:01:28.154944 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 28 00:01:28.454493 systemd[1]: Started sshd@1-159.69.123.112:22-20.161.92.111:50472.service - OpenSSH per-connection server daemon (20.161.92.111:50472). Jan 28 00:01:28.973971 sshd[1737]: Accepted publickey for core from 20.161.92.111 port 50472 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:28.979114 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:28.991312 systemd-logind[1561]: New session 3 of user core. Jan 28 00:01:29.000068 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:01:29.261809 sshd[1741]: Connection closed by 20.161.92.111 port 50472 Jan 28 00:01:29.262768 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Jan 28 00:01:29.272072 systemd[1]: sshd@1-159.69.123.112:22-20.161.92.111:50472.service: Deactivated successfully. Jan 28 00:01:29.272474 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Jan 28 00:01:29.275228 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 00:01:29.280723 systemd-logind[1561]: Removed session 3. Jan 28 00:01:29.374150 systemd[1]: Started sshd@2-159.69.123.112:22-20.161.92.111:50488.service - OpenSSH per-connection server daemon (20.161.92.111:50488). Jan 28 00:01:29.903082 sshd[1747]: Accepted publickey for core from 20.161.92.111 port 50488 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:29.907878 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:29.918499 systemd-logind[1561]: New session 4 of user core. Jan 28 00:01:29.929977 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:01:30.183209 sshd[1751]: Connection closed by 20.161.92.111 port 50488 Jan 28 00:01:30.184265 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Jan 28 00:01:30.194771 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Jan 28 00:01:30.195231 systemd[1]: sshd@2-159.69.123.112:22-20.161.92.111:50488.service: Deactivated successfully. Jan 28 00:01:30.199696 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 00:01:30.205382 systemd-logind[1561]: Removed session 4. Jan 28 00:01:30.292440 systemd[1]: Started sshd@3-159.69.123.112:22-20.161.92.111:50496.service - OpenSSH per-connection server daemon (20.161.92.111:50496). Jan 28 00:01:30.837852 sshd[1757]: Accepted publickey for core from 20.161.92.111 port 50496 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:30.846043 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:30.853211 systemd-logind[1561]: New session 5 of user core. Jan 28 00:01:30.866099 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:01:31.123077 sshd[1761]: Connection closed by 20.161.92.111 port 50496 Jan 28 00:01:31.122849 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jan 28 00:01:31.132330 systemd[1]: sshd@3-159.69.123.112:22-20.161.92.111:50496.service: Deactivated successfully. Jan 28 00:01:31.137131 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 00:01:31.140729 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Jan 28 00:01:31.146666 systemd-logind[1561]: Removed session 5. Jan 28 00:01:31.228771 systemd[1]: Started sshd@4-159.69.123.112:22-20.161.92.111:50508.service - OpenSSH per-connection server daemon (20.161.92.111:50508). 
Jan 28 00:01:31.758680 sshd[1767]: Accepted publickey for core from 20.161.92.111 port 50508 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:31.760007 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:31.774055 systemd-logind[1561]: New session 6 of user core. Jan 28 00:01:31.781573 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:01:31.966454 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:01:31.967427 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:01:31.985021 sudo[1772]: pam_unix(sudo:session): session closed for user root Jan 28 00:01:32.077985 sshd[1771]: Connection closed by 20.161.92.111 port 50508 Jan 28 00:01:32.080252 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Jan 28 00:01:32.090136 systemd[1]: sshd@4-159.69.123.112:22-20.161.92.111:50508.service: Deactivated successfully. Jan 28 00:01:32.092803 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:01:32.094644 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:01:32.096968 systemd-logind[1561]: Removed session 6. Jan 28 00:01:32.192818 systemd[1]: Started sshd@5-159.69.123.112:22-20.161.92.111:50518.service - OpenSSH per-connection server daemon (20.161.92.111:50518). Jan 28 00:01:32.719030 sshd[1779]: Accepted publickey for core from 20.161.92.111 port 50518 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:32.722081 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:32.730755 systemd-logind[1561]: New session 7 of user core. Jan 28 00:01:32.749026 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 00:01:32.916366 sudo[1785]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:01:32.916869 sudo[1785]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:01:32.923142 sudo[1785]: pam_unix(sudo:session): session closed for user root Jan 28 00:01:32.936074 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 28 00:01:32.936877 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:01:32.951159 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 28 00:01:33.009000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 00:01:33.017556 kernel: kauditd_printk_skb: 184 callbacks suppressed Jan 28 00:01:33.018713 kernel: audit: type=1305 audit(1769558493.009:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 28 00:01:33.020330 kernel: audit: type=1300 audit(1769558493.009:225): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffb483230 a2=420 a3=0 items=0 ppid=1790 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:33.020415 kernel: audit: type=1327 audit(1769558493.009:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:01:33.009000 audit[1809]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffb483230 a2=420 a3=0 items=0 ppid=1790 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:33.009000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 28 00:01:33.020546 augenrules[1809]: No rules Jan 28 00:01:33.021123 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:01:33.022675 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 28 00:01:33.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.026717 sudo[1784]: pam_unix(sudo:session): session closed for user root Jan 28 00:01:33.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.032212 kernel: audit: type=1130 audit(1769558493.022:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.032316 kernel: audit: type=1131 audit(1769558493.022:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.034333 kernel: audit: type=1106 audit(1769558493.027:228): pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.027000 audit[1784]: USER_END pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.029000 audit[1784]: CRED_DISP pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:33.036457 kernel: audit: type=1104 audit(1769558493.029:229): pid=1784 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.130043 sshd[1783]: Connection closed by 20.161.92.111 port 50518 Jan 28 00:01:33.128941 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Jan 28 00:01:33.133000 audit[1779]: USER_END pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.133000 audit[1779]: CRED_DISP pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.143889 kernel: audit: type=1106 audit(1769558493.133:230): pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.144021 kernel: audit: type=1104 audit(1769558493.133:231): pid=1779 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.145099 systemd[1]: sshd@5-159.69.123.112:22-20.161.92.111:50518.service: Deactivated successfully. Jan 28 00:01:33.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-159.69.123.112:22-20.161.92.111:50518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.148721 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:01:33.151101 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:01:33.156773 kernel: audit: type=1131 audit(1769558493.143:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-159.69.123.112:22-20.161.92.111:50518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.158474 systemd-logind[1561]: Removed session 7. Jan 28 00:01:33.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-159.69.123.112:22-20.161.92.111:35142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.238708 systemd[1]: Started sshd@6-159.69.123.112:22-20.161.92.111:35142.service - OpenSSH per-connection server daemon (20.161.92.111:35142). 
Jan 28 00:01:33.767000 audit[1818]: USER_ACCT pid=1818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.768954 sshd[1818]: Accepted publickey for core from 20.161.92.111 port 35142 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:01:33.769000 audit[1818]: CRED_ACQ pid=1818 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.770000 audit[1818]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcda20ec0 a2=3 a3=0 items=0 ppid=1 pid=1818 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:33.770000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:01:33.772376 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:01:33.779887 systemd-logind[1561]: New session 8 of user core. Jan 28 00:01:33.793077 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:01:33.798000 audit[1818]: USER_START pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.801000 audit[1822]: CRED_ACQ pid=1822 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:01:33.970688 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:01:33.969000 audit[1823]: USER_ACCT pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.970000 audit[1823]: CRED_REFR pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.971000 audit[1823]: USER_START pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:01:33.971837 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:01:34.355103 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 28 00:01:34.369210 (dockerd)[1842]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:01:34.664986 dockerd[1842]: time="2026-01-28T00:01:34.664536397Z" level=info msg="Starting up" Jan 28 00:01:34.667006 dockerd[1842]: time="2026-01-28T00:01:34.666898305Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 28 00:01:34.690827 dockerd[1842]: time="2026-01-28T00:01:34.690677548Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 28 00:01:34.718246 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1521266644-merged.mount: Deactivated successfully. Jan 28 00:01:34.753459 dockerd[1842]: time="2026-01-28T00:01:34.753175863Z" level=info msg="Loading containers: start." Jan 28 00:01:34.766805 kernel: Initializing XFRM netlink socket Jan 28 00:01:34.837000 audit[1891]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.837000 audit[1891]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffc2adcc60 a2=0 a3=0 items=0 ppid=1842 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 00:01:34.840000 audit[1893]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.840000 audit[1893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd1fdd640 a2=0 a3=0 items=0 ppid=1842 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.840000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 00:01:34.842000 audit[1895]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.842000 audit[1895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffebf11bc0 a2=0 a3=0 items=0 ppid=1842 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.842000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 00:01:34.845000 audit[1897]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.845000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1163880 a2=0 a3=0 items=0 ppid=1842 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.845000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 00:01:34.849000 audit[1899]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.849000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff624f5d0 a2=0 a3=0 items=0 ppid=1842 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 00:01:34.852000 audit[1901]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.852000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffef2e0260 a2=0 a3=0 items=0 ppid=1842 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.852000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:01:34.854000 audit[1903]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.854000 audit[1903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd4ffccc0 a2=0 a3=0 items=0 ppid=1842 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.854000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 00:01:34.857000 audit[1905]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.857000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffcd291ed0 a2=0 a3=0 items=0 ppid=1842 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.857000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 00:01:34.885000 audit[1908]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.885000 audit[1908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffdff73040 a2=0 a3=0 items=0 ppid=1842 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.885000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 28 00:01:34.888000 audit[1910]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.888000 audit[1910]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffe23be120 a2=0 a3=0 items=0 ppid=1842 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.888000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 00:01:34.890000 audit[1912]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.890000 audit[1912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=fffff1da55f0 a2=0 a3=0 items=0 ppid=1842 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 00:01:34.893000 audit[1914]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.893000 audit[1914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd82ff560 a2=0 a3=0 items=0 ppid=1842 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.893000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:01:34.896000 audit[1916]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.896000 audit[1916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffca399a10 a2=0 a3=0 items=0 ppid=1842 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.896000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 00:01:34.947000 audit[1946]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.947000 audit[1946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffcc283960 a2=0 a3=0 items=0 ppid=1842 pid=1946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.947000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 28 00:01:34.951000 audit[1948]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.951000 audit[1948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffffbf3cab0 a2=0 a3=0 items=0 ppid=1842 pid=1948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.951000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 28 00:01:34.954000 audit[1950]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.954000 audit[1950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1ba48e0 a2=0 a3=0 items=0 ppid=1842 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.954000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 28 00:01:34.957000 audit[1952]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.957000 audit[1952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6dbaa90 a2=0 a3=0 items=0 ppid=1842 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 28 00:01:34.960000 audit[1954]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.960000 audit[1954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffa2e0100 a2=0 a3=0 items=0 ppid=1842 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.960000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 28 00:01:34.963000 audit[1956]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.963000 audit[1956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeae94540 a2=0 a3=0 items=0 ppid=1842 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:01:34.966000 audit[1958]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=1958 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.966000 audit[1958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc4cdbb20 a2=0 a3=0 items=0 ppid=1842 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.966000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 00:01:34.968000 audit[1960]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=1960 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.968000 audit[1960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=fffff137f0f0 a2=0 a3=0 items=0 ppid=1842 pid=1960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.968000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 28 00:01:34.972000 audit[1962]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.972000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc4361360 a2=0 a3=0 items=0 ppid=1842 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.972000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 28 00:01:34.975000 audit[1964]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.975000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffed3e2bb0 a2=0 a3=0 items=0 ppid=1842 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.975000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 28 00:01:34.979000 audit[1966]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.979000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd7b96e30 a2=0 a3=0 items=0 ppid=1842 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.979000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 28 00:01:34.981000 audit[1968]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=1968 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.981000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc819a2a0 a2=0 a3=0 items=0 ppid=1842 pid=1968 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.981000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 28 00:01:34.984000 audit[1970]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:34.984000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffda3e0b10 a2=0 a3=0 items=0 ppid=1842 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.984000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 28 00:01:34.991000 audit[1975]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=1975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.991000 audit[1975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff27ee3d0 a2=0 a3=0 items=0 ppid=1842 pid=1975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.991000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 00:01:34.994000 audit[1977]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=1977 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.994000 audit[1977]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe0d8b740 a2=0 a3=0 items=0 ppid=1842 pid=1977 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 00:01:34.997000 audit[1979]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:34.997000 audit[1979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffde39ad50 a2=0 a3=0 items=0 ppid=1842 pid=1979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:34.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 00:01:35.000000 audit[1981]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=1981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:35.000000 audit[1981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe0359e80 a2=0 a3=0 items=0 ppid=1842 pid=1981 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.000000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 28 00:01:35.004000 audit[1983]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=1983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:35.004000 audit[1983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffda7c26b0 a2=0 a3=0 items=0 ppid=1842 pid=1983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.004000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 28 00:01:35.007000 audit[1985]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=1985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:35.007000 audit[1985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd6dc5a60 a2=0 a3=0 items=0 ppid=1842 pid=1985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.007000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 28 00:01:35.031000 audit[1990]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.031000 audit[1990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffff8cc730 a2=0 a3=0 items=0 ppid=1842 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.031000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 28 00:01:35.036000 audit[1992]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=1992 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.036000 audit[1992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc539acd0 a2=0 a3=0 items=0 ppid=1842 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.036000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 28 00:01:35.048000 audit[2000]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2000 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.048000 audit[2000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc4ef6370 a2=0 a3=0 items=0 ppid=1842 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 28 00:01:35.063000 audit[2006]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.063000 audit[2006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff3507840 a2=0 a3=0 items=0 ppid=1842 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.063000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 28 00:01:35.067000 audit[2008]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.067000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffd32ba460 a2=0 a3=0 items=0 ppid=1842 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.067000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 28 00:01:35.072000 audit[2010]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.072000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffff43ddb60 a2=0 a3=0 items=0 ppid=1842 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.072000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 28 00:01:35.077000 audit[2012]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.077000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcca23df0 a2=0 a3=0 items=0 ppid=1842 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.077000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 28 00:01:35.080000 audit[2014]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:35.080000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc61691d0 a2=0 a3=0 items=0 ppid=1842 pid=2014 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:35.080000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 28 00:01:35.083185 systemd-networkd[1464]: docker0: Link UP Jan 28 00:01:35.093037 dockerd[1842]: time="2026-01-28T00:01:35.092205071Z" level=info msg="Loading containers: done." Jan 28 00:01:35.125298 dockerd[1842]: time="2026-01-28T00:01:35.125220144Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:01:35.125806 dockerd[1842]: time="2026-01-28T00:01:35.125771579Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 28 00:01:35.126162 dockerd[1842]: time="2026-01-28T00:01:35.126132007Z" level=info msg="Initializing buildkit" Jan 28 00:01:35.163832 dockerd[1842]: time="2026-01-28T00:01:35.163759000Z" level=info msg="Completed buildkit initialization" Jan 28 00:01:35.174626 dockerd[1842]: time="2026-01-28T00:01:35.174040595Z" level=info msg="Daemon has completed initialization" Jan 28 00:01:35.174626 dockerd[1842]: time="2026-01-28T00:01:35.174292050Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:01:35.175535 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:01:35.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:35.716404 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2247289994-merged.mount: Deactivated successfully. Jan 28 00:01:36.275904 containerd[1582]: time="2026-01-28T00:01:36.274953269Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 00:01:36.586228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:01:36.590353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:01:36.763146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:36.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:36.775020 (kubelet)[2061]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:01:36.852058 kubelet[2061]: E0128 00:01:36.851567 2061 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:01:36.858053 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:01:36.858236 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
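The audit records above log each Docker-generated netfilter command only as a hex-encoded PROCTITLE value, which is hard to read directly. A minimal sketch for turning those hex strings back into readable command lines (audit stores argv entries NUL-separated); the example value is copied verbatim from one of the records above:

```python
def decode_proctitle(hex_title: str) -> str:
    """Decode an audit PROCTITLE hex string; argv entries are NUL-separated."""
    return bytes.fromhex(hex_title).decode("utf-8", errors="replace").replace("\x00", " ")

# Value taken verbatim from one of the audit records above:
print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B45522D55534552"
))
# -> /usr/bin/iptables --wait -t filter -N DOCKER-USER
```

Decoded this way, the records above show dockerd creating its usual chains (DOCKER, DOCKER-USER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2) for both IPv4 and IPv6 before the `docker0: Link UP` event.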
Jan 28 00:01:36.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:01:36.863909 systemd[1]: kubelet.service: Consumed 197ms CPU time, 107.4M memory peak. Jan 28 00:01:37.152923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount501359576.mount: Deactivated successfully. Jan 28 00:01:37.860464 containerd[1582]: time="2026-01-28T00:01:37.860371980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:37.862245 containerd[1582]: time="2026-01-28T00:01:37.862174009Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 28 00:01:37.862857 containerd[1582]: time="2026-01-28T00:01:37.862801670Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:37.866501 containerd[1582]: time="2026-01-28T00:01:37.866427605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:37.869674 containerd[1582]: time="2026-01-28T00:01:37.869504133Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.594497622s" Jan 28 00:01:37.869773 containerd[1582]: time="2026-01-28T00:01:37.869675133Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 28 00:01:37.871013 containerd[1582]: time="2026-01-28T00:01:37.870897939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 00:01:39.176610 containerd[1582]: time="2026-01-28T00:01:39.176463691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:39.179764 containerd[1582]: time="2026-01-28T00:01:39.179675912Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 28 00:01:39.181143 containerd[1582]: time="2026-01-28T00:01:39.181064964Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:39.185996 containerd[1582]: time="2026-01-28T00:01:39.185225240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:39.186801 containerd[1582]: time="2026-01-28T00:01:39.186739138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag 
\"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.315792739s" Jan 28 00:01:39.186801 containerd[1582]: time="2026-01-28T00:01:39.186798081Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 28 00:01:39.188424 containerd[1582]: time="2026-01-28T00:01:39.188169076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 00:01:40.275061 containerd[1582]: time="2026-01-28T00:01:40.274769053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:40.277183 containerd[1582]: time="2026-01-28T00:01:40.277035952Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 28 00:01:40.278107 containerd[1582]: time="2026-01-28T00:01:40.278031889Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:40.286636 containerd[1582]: time="2026-01-28T00:01:40.285867329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:40.287392 containerd[1582]: time="2026-01-28T00:01:40.287341872Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.09911961s" Jan 28 00:01:40.287615 containerd[1582]: time="2026-01-28T00:01:40.287573752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 28 00:01:40.288984 containerd[1582]: time="2026-01-28T00:01:40.288936693Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 00:01:41.355457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523351483.mount: Deactivated successfully. 
Jan 28 00:01:41.711476 containerd[1582]: time="2026-01-28T00:01:41.711385371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:41.713275 containerd[1582]: time="2026-01-28T00:01:41.713180438Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 28 00:01:41.714661 containerd[1582]: time="2026-01-28T00:01:41.714613648Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:41.718763 containerd[1582]: time="2026-01-28T00:01:41.718690572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:41.719187 containerd[1582]: time="2026-01-28T00:01:41.719139727Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.42999837s" Jan 28 00:01:41.719187 containerd[1582]: time="2026-01-28T00:01:41.719181664Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 28 00:01:41.719788 containerd[1582]: time="2026-01-28T00:01:41.719723334Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 00:01:42.338355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2438295535.mount: Deactivated successfully. 
Jan 28 00:01:43.053626 containerd[1582]: time="2026-01-28T00:01:43.052199582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:43.055185 containerd[1582]: time="2026-01-28T00:01:43.055103625Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Jan 28 00:01:43.056006 containerd[1582]: time="2026-01-28T00:01:43.055960757Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:43.060846 containerd[1582]: time="2026-01-28T00:01:43.060739248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:43.062198 containerd[1582]: time="2026-01-28T00:01:43.062152280Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3423864s" Jan 28 00:01:43.062360 containerd[1582]: time="2026-01-28T00:01:43.062343042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 28 00:01:43.063627 containerd[1582]: time="2026-01-28T00:01:43.063577591Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 00:01:43.621557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2998178987.mount: Deactivated successfully. 
Jan 28 00:01:43.628088 containerd[1582]: time="2026-01-28T00:01:43.628023928Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:01:43.629207 containerd[1582]: time="2026-01-28T00:01:43.629140273Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 28 00:01:43.630620 containerd[1582]: time="2026-01-28T00:01:43.630343726Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:01:43.632968 containerd[1582]: time="2026-01-28T00:01:43.632929155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:01:43.634534 containerd[1582]: time="2026-01-28T00:01:43.634502411Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 570.744503ms" Jan 28 00:01:43.634768 containerd[1582]: time="2026-01-28T00:01:43.634659087Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 28 00:01:43.635250 containerd[1582]: time="2026-01-28T00:01:43.635209127Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 00:01:44.335608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211934751.mount: Deactivated successfully. 
Jan 28 00:01:45.878156 containerd[1582]: time="2026-01-28T00:01:45.878023803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:45.880146 containerd[1582]: time="2026-01-28T00:01:45.879773085Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Jan 28 00:01:45.882194 containerd[1582]: time="2026-01-28T00:01:45.882135997Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:45.890467 containerd[1582]: time="2026-01-28T00:01:45.889160495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:01:45.892733 containerd[1582]: time="2026-01-28T00:01:45.892477125Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.25722019s" Jan 28 00:01:45.892733 containerd[1582]: time="2026-01-28T00:01:45.892535975Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 28 00:01:46.926141 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:01:46.933003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:01:47.123539 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 28 00:01:47.123761 kernel: audit: type=1130 audit(1769558507.120:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:47.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:47.120932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:47.132067 (kubelet)[2280]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:01:47.183339 kubelet[2280]: E0128 00:01:47.183184 2280 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:01:47.187266 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:01:47.187668 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:01:47.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:01:47.190282 systemd[1]: kubelet.service: Consumed 199ms CPU time, 107.1M memory peak. 
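Both kubelet starts above fail the same way: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd keeps scheduling restarts. A minimal sketch of the check behind that error, under the assumption that this node is mid-bootstrap and the config file is only written later (for example by kubeadm during init/join):

```python
from pathlib import Path

# Path taken from the error messages above; whether kubeadm (or something else)
# eventually writes it is an assumption about this particular setup.
kubelet_config = Path("/var/lib/kubelet/config.yaml")

if kubelet_config.exists():
    print(f"{kubelet_config} present ({kubelet_config.stat().st_size} bytes)")
else:
    print(f"{kubelet_config} missing - kubelet will keep exiting until it is written")
```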
Jan 28 00:01:47.192689 kernel: audit: type=1131 audit(1769558507.186:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:01:51.197831 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:51.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:51.198879 systemd[1]: kubelet.service: Consumed 199ms CPU time, 107.1M memory peak. Jan 28 00:01:51.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:51.205359 kernel: audit: type=1130 audit(1769558511.197:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:51.205490 kernel: audit: type=1131 audit(1769558511.197:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:51.207367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:01:51.253279 systemd[1]: Reload requested from client PID 2294 ('systemctl') (unit session-8.scope)... Jan 28 00:01:51.253466 systemd[1]: Reloading... Jan 28 00:01:51.409944 zram_generator::config[2353]: No configuration found. Jan 28 00:01:51.587448 systemd[1]: Reloading finished in 333 ms. 
Jan 28 00:01:51.641000 audit: BPF prog-id=63 op=LOAD Jan 28 00:01:51.641000 audit: BPF prog-id=64 op=LOAD Jan 28 00:01:51.641000 audit: BPF prog-id=46 op=UNLOAD Jan 28 00:01:51.645092 kernel: audit: type=1334 audit(1769558511.641:289): prog-id=63 op=LOAD Jan 28 00:01:51.645156 kernel: audit: type=1334 audit(1769558511.641:290): prog-id=64 op=LOAD Jan 28 00:01:51.645179 kernel: audit: type=1334 audit(1769558511.641:291): prog-id=46 op=UNLOAD Jan 28 00:01:51.641000 audit: BPF prog-id=47 op=UNLOAD Jan 28 00:01:51.648202 kernel: audit: type=1334 audit(1769558511.641:292): prog-id=47 op=UNLOAD Jan 28 00:01:51.648260 kernel: audit: type=1334 audit(1769558511.644:293): prog-id=65 op=LOAD Jan 28 00:01:51.644000 audit: BPF prog-id=65 op=LOAD Jan 28 00:01:51.645000 audit: BPF prog-id=51 op=UNLOAD Jan 28 00:01:51.645000 audit: BPF prog-id=66 op=LOAD Jan 28 00:01:51.645000 audit: BPF prog-id=67 op=LOAD Jan 28 00:01:51.645000 audit: BPF prog-id=52 op=UNLOAD Jan 28 00:01:51.645000 audit: BPF prog-id=53 op=UNLOAD Jan 28 00:01:51.646000 audit: BPF prog-id=68 op=LOAD Jan 28 00:01:51.646000 audit: BPF prog-id=58 op=UNLOAD Jan 28 00:01:51.646000 audit: BPF prog-id=69 op=LOAD Jan 28 00:01:51.646000 audit: BPF prog-id=43 op=UNLOAD Jan 28 00:01:51.646000 audit: BPF prog-id=70 op=LOAD Jan 28 00:01:51.649641 kernel: audit: type=1334 audit(1769558511.645:294): prog-id=51 op=UNLOAD Jan 28 00:01:51.651000 audit: BPF prog-id=71 op=LOAD Jan 28 00:01:51.651000 audit: BPF prog-id=44 op=UNLOAD Jan 28 00:01:51.651000 audit: BPF prog-id=45 op=UNLOAD Jan 28 00:01:51.651000 audit: BPF prog-id=72 op=LOAD Jan 28 00:01:51.651000 audit: BPF prog-id=55 op=UNLOAD Jan 28 00:01:51.651000 audit: BPF prog-id=73 op=LOAD Jan 28 00:01:51.652000 audit: BPF prog-id=74 op=LOAD Jan 28 00:01:51.652000 audit: BPF prog-id=56 op=UNLOAD Jan 28 00:01:51.652000 audit: BPF prog-id=57 op=UNLOAD Jan 28 00:01:51.653000 audit: BPF prog-id=75 op=LOAD Jan 28 00:01:51.653000 audit: BPF prog-id=48 op=UNLOAD Jan 28 00:01:51.653000 audit: BPF prog-id=76 op=LOAD Jan 28 00:01:51.653000 audit: BPF prog-id=77 op=LOAD Jan 28 00:01:51.653000 audit: BPF prog-id=49 op=UNLOAD Jan 28 00:01:51.653000 audit: BPF prog-id=50 op=UNLOAD Jan 28 00:01:51.654000 audit: BPF prog-id=78 op=LOAD Jan 28 00:01:51.654000 audit: BPF prog-id=54 op=UNLOAD Jan 28 00:01:51.656000 audit: BPF prog-id=79 op=LOAD Jan 28 00:01:51.656000 audit: BPF prog-id=60 op=UNLOAD Jan 28 00:01:51.657000 audit: BPF prog-id=80 op=LOAD Jan 28 00:01:51.657000 audit: BPF prog-id=81 op=LOAD Jan 28 00:01:51.657000 audit: BPF prog-id=61 op=UNLOAD Jan 28 00:01:51.657000 audit: BPF prog-id=62 op=UNLOAD Jan 28 00:01:51.657000 audit: BPF prog-id=82 op=LOAD Jan 28 00:01:51.657000 audit: BPF prog-id=59 op=UNLOAD Jan 28 00:01:51.674963 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 28 00:01:51.675059 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 28 00:01:51.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 28 00:01:51.675893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:51.675978 systemd[1]: kubelet.service: Consumed 130ms CPU time, 95.1M memory peak. Jan 28 00:01:51.679174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:01:51.856513 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 28 00:01:51.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:51.866191 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:01:51.916016 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:01:51.917623 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:01:51.917623 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:01:51.917623 kubelet[2389]: I0128 00:01:51.916505 2389 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:01:52.893812 kubelet[2389]: I0128 00:01:52.893671 2389 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:01:52.893812 kubelet[2389]: I0128 00:01:52.893748 2389 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:01:52.894690 kubelet[2389]: I0128 00:01:52.894628 2389 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:01:52.938633 kubelet[2389]: E0128 00:01:52.938001 2389 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://159.69.123.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 159.69.123.112:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:01:52.941642 kubelet[2389]: I0128 00:01:52.941408 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:01:52.954038 kubelet[2389]: I0128 00:01:52.953989 2389 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:01:52.959075 kubelet[2389]: I0128 00:01:52.958609 2389 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:01:52.960081 kubelet[2389]: I0128 00:01:52.959964 2389 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:01:52.960546 kubelet[2389]: I0128 00:01:52.960283 2389 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4593-0-0-n-20383d5ef7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:01:52.960869 kubelet[2389]: I0128 00:01:52.960849 2389 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:01:52.960930 kubelet[2389]: I0128 00:01:52.960922 2389 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:01:52.961331 kubelet[2389]: I0128 00:01:52.961306 2389 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:01:52.965860 kubelet[2389]: I0128 00:01:52.965811 2389 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:01:52.966556 kubelet[2389]: I0128 00:01:52.966025 2389 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:01:52.966556 kubelet[2389]: I0128 00:01:52.966065 2389 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:01:52.966556 kubelet[2389]: I0128 00:01:52.966083 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:01:52.972362 kubelet[2389]: W0128 00:01:52.972245 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://159.69.123.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4593-0-0-n-20383d5ef7&limit=500&resourceVersion=0": dial tcp 159.69.123.112:6443: connect: connection refused Jan 28 00:01:52.972362 kubelet[2389]: E0128 00:01:52.972376 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://159.69.123.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4593-0-0-n-20383d5ef7&limit=500&resourceVersion=0\": dial tcp 159.69.123.112:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:01:52.973850 
kubelet[2389]: W0128 00:01:52.973760 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://159.69.123.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 159.69.123.112:6443: connect: connection refused Jan 28 00:01:52.973850 kubelet[2389]: E0128 00:01:52.973853 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://159.69.123.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 159.69.123.112:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:01:52.974024 kubelet[2389]: I0128 00:01:52.974012 2389 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 00:01:52.974990 kubelet[2389]: I0128 00:01:52.974943 2389 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:01:52.975942 kubelet[2389]: W0128 00:01:52.975216 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:01:52.977519 kubelet[2389]: I0128 00:01:52.977463 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:01:52.977719 kubelet[2389]: I0128 00:01:52.977541 2389 server.go:1287] "Started kubelet" Jan 28 00:01:52.982367 kubelet[2389]: I0128 00:01:52.982314 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:01:52.989733 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 28 00:01:52.989923 kernel: audit: type=1325 audit(1769558512.986:331): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:52.986000 audit[2401]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2401 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:52.993015 kernel: audit: type=1300 audit(1769558512.986:331): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffce10a1c0 a2=0 a3=0 items=0 ppid=2389 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:52.986000 audit[2401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffce10a1c0 a2=0 a3=0 items=0 ppid=2389 pid=2401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:52.993394 kubelet[2389]: I0128 00:01:52.991737 2389 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:01:52.994076 kubelet[2389]: I0128 00:01:52.994013 2389 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:01:52.996184 kernel: audit: type=1327 audit(1769558512.986:331): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 00:01:52.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 00:01:52.989000 audit[2402]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:52.998053 
kernel: audit: type=1325 audit(1769558512.989:332): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2402 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:52.998218 kubelet[2389]: I0128 00:01:52.997941 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:01:53.001088 kernel: audit: type=1300 audit(1769558512.989:332): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe136b6a0 a2=0 a3=0 items=0 ppid=2389 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:52.989000 audit[2402]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe136b6a0 a2=0 a3=0 items=0 ppid=2389 pid=2402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.001439 kubelet[2389]: I0128 00:01:52.998447 2389 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:01:52.989000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 00:01:53.003628 kernel: audit: type=1327 audit(1769558512.989:332): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 00:01:53.010256 kernel: audit: type=1325 audit(1769558513.004:333): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.010425 kernel: audit: type=1300 audit(1769558513.004:333): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff8df3280 a2=0 a3=0 items=0 ppid=2389 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.004000 audit[2404]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2404 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.004000 audit[2404]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff8df3280 a2=0 a3=0 items=0 ppid=2389 pid=2404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.010617 kubelet[2389]: I0128 00:01:53.010080 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:01:53.004000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:01:53.012589 kernel: audit: type=1327 audit(1769558513.004:333): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:01:53.016390 kubelet[2389]: I0128 00:01:53.016332 2389 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:01:53.018637 kubelet[2389]: E0128 00:01:53.017662 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" Jan 28 
00:01:53.019143 kubelet[2389]: I0128 00:01:53.019092 2389 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:01:53.019311 kubelet[2389]: I0128 00:01:53.019298 2389 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:01:53.019000 audit[2406]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.019000 audit[2406]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffefc3c0c0 a2=0 a3=0 items=0 ppid=2389 pid=2406 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:01:53.022614 kernel: audit: type=1325 audit(1769558513.019:334): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2406 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.022984 kubelet[2389]: E0128 00:01:53.022565 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://159.69.123.112:6443/api/v1/namespaces/default/events\": dial tcp 159.69.123.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4593-0-0-n-20383d5ef7.188ebc178f4ae6f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4593-0-0-n-20383d5ef7,UID:ci-4593-0-0-n-20383d5ef7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4593-0-0-n-20383d5ef7,},FirstTimestamp:2026-01-28 00:01:52.977503992 +0000 UTC m=+1.105832306,LastTimestamp:2026-01-28 00:01:52.977503992 +0000 UTC m=+1.105832306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4593-0-0-n-20383d5ef7,}" Jan 28 00:01:53.025630 kubelet[2389]: W0128 00:01:53.025490 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.123.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.123.112:6443: connect: connection refused Jan 28 00:01:53.025630 kubelet[2389]: E0128 00:01:53.025635 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.69.123.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.123.112:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:01:53.025830 kubelet[2389]: E0128 00:01:53.025808 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.123.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-n-20383d5ef7?timeout=10s\": dial tcp 159.69.123.112:6443: connect: connection refused" interval="200ms" Jan 28 00:01:53.026197 kubelet[2389]: I0128 00:01:53.026153 2389 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:01:53.026322 kubelet[2389]: I0128 00:01:53.026285 2389 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:01:53.031798 
kubelet[2389]: E0128 00:01:53.031429 2389 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:01:53.033523 kubelet[2389]: I0128 00:01:53.032054 2389 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:01:53.053000 audit[2412]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2412 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.053000 audit[2412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe19f6770 a2=0 a3=0 items=0 ppid=2389 pid=2412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 28 00:01:53.059647 kubelet[2389]: I0128 00:01:53.058551 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:01:53.058000 audit[2413]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2413 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:53.058000 audit[2413]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd71c9f20 a2=0 a3=0 items=0 ppid=2389 pid=2413 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 28 00:01:53.061412 kubelet[2389]: I0128 00:01:53.061351 2389 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:01:53.061412 kubelet[2389]: I0128 00:01:53.061405 2389 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:01:53.061618 kubelet[2389]: I0128 00:01:53.061433 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 28 00:01:53.061618 kubelet[2389]: I0128 00:01:53.061441 2389 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:01:53.061618 kubelet[2389]: E0128 00:01:53.061529 2389 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:01:53.061000 audit[2414]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2414 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.061000 audit[2414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd7e6a70 a2=0 a3=0 items=0 ppid=2389 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.061000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 00:01:53.065000 audit[2415]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2415 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:53.065000 audit[2415]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc214a920 a2=0 a3=0 items=0 ppid=2389 pid=2415 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.065000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 28 00:01:53.067000 audit[2416]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2416 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:53.067000 audit[2416]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd373d160 a2=0 a3=0 items=0 ppid=2389 pid=2416 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.067000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 00:01:53.069000 audit[2417]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2417 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.069000 audit[2417]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffffa78b10 a2=0 a3=0 items=0 ppid=2389 pid=2417 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 28 00:01:53.069000 audit[2418]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2418 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:01:53.069000 audit[2418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9361070 a2=0 a3=0 items=0 ppid=2389 pid=2418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
00:01:53.069000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 00:01:53.074053 kubelet[2389]: W0128 00:01:53.073974 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://159.69.123.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 159.69.123.112:6443: connect: connection refused Jan 28 00:01:53.074334 kubelet[2389]: E0128 00:01:53.074305 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://159.69.123.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 159.69.123.112:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:01:53.074000 audit[2419]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2419 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:01:53.074000 audit[2419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffbdbcc30 a2=0 a3=0 items=0 ppid=2389 pid=2419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.074000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 28 00:01:53.077483 kubelet[2389]: I0128 00:01:53.077450 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:01:53.077681 kubelet[2389]: I0128 00:01:53.077666 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:01:53.078150 kubelet[2389]: I0128 00:01:53.077733 2389 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:01:53.085474 kubelet[2389]: I0128 00:01:53.085418 2389 policy_none.go:49] "None policy: Start" Jan 28 00:01:53.085728 kubelet[2389]: I0128 00:01:53.085706 2389 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:01:53.085848 kubelet[2389]: I0128 00:01:53.085836 2389 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:01:53.097211 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:01:53.117821 kubelet[2389]: E0128 00:01:53.117770 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" Jan 28 00:01:53.118693 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:01:53.128409 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 28 00:01:53.141548 kubelet[2389]: I0128 00:01:53.138894 2389 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:01:53.144730 kubelet[2389]: I0128 00:01:53.143756 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:01:53.144730 kubelet[2389]: I0128 00:01:53.143834 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:01:53.144730 kubelet[2389]: I0128 00:01:53.144264 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:01:53.151382 kubelet[2389]: E0128 00:01:53.151311 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:01:53.151382 kubelet[2389]: E0128 00:01:53.151391 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4593-0-0-n-20383d5ef7\" not found" Jan 28 00:01:53.183153 systemd[1]: Created slice kubepods-burstable-podd84671642873fcd842d501d0e1d32add.slice - libcontainer container kubepods-burstable-podd84671642873fcd842d501d0e1d32add.slice. Jan 28 00:01:53.212300 kubelet[2389]: E0128 00:01:53.212214 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.217819 systemd[1]: Created slice kubepods-burstable-pod42d8c7a7e8aece5ef2d333916dd4b152.slice - libcontainer container kubepods-burstable-pod42d8c7a7e8aece5ef2d333916dd4b152.slice. Jan 28 00:01:53.223188 kubelet[2389]: I0128 00:01:53.222977 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-flexvolume-dir\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223407 kubelet[2389]: I0128 00:01:53.223228 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-kubeconfig\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223407 kubelet[2389]: I0128 00:01:53.223304 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223538 kubelet[2389]: I0128 00:01:53.223395 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42d8c7a7e8aece5ef2d333916dd4b152-kubeconfig\") pod \"kube-scheduler-ci-4593-0-0-n-20383d5ef7\" (UID: \"42d8c7a7e8aece5ef2d333916dd4b152\") " pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223538 kubelet[2389]: I0128 00:01:53.223473 2389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b708d159e719b348fda517baecaf1164-ca-certs\") pod \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" (UID: \"b708d159e719b348fda517baecaf1164\") " pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223663 kubelet[2389]: I0128 00:01:53.223553 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-ca-certs\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223716 kubelet[2389]: I0128 00:01:53.223678 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b708d159e719b348fda517baecaf1164-k8s-certs\") pod \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" (UID: \"b708d159e719b348fda517baecaf1164\") " pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223716 kubelet[2389]: I0128 00:01:53.223706 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b708d159e719b348fda517baecaf1164-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" (UID: \"b708d159e719b348fda517baecaf1164\") " pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.223802 kubelet[2389]: I0128 00:01:53.223749 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-k8s-certs\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.227473 kubelet[2389]: E0128 00:01:53.227393 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.123.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-n-20383d5ef7?timeout=10s\": dial tcp 159.69.123.112:6443: connect: connection refused" interval="400ms" Jan 28 00:01:53.228868 kubelet[2389]: E0128 00:01:53.228795 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.234997 systemd[1]: Created slice kubepods-burstable-podb708d159e719b348fda517baecaf1164.slice - libcontainer container kubepods-burstable-podb708d159e719b348fda517baecaf1164.slice. 
Jan 28 00:01:53.239165 kubelet[2389]: E0128 00:01:53.239043 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.248247 kubelet[2389]: I0128 00:01:53.248130 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.249548 kubelet[2389]: E0128 00:01:53.249475 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.123.112:6443/api/v1/nodes\": dial tcp 159.69.123.112:6443: connect: connection refused" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.453700 kubelet[2389]: I0128 00:01:53.453351 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.454197 kubelet[2389]: E0128 00:01:53.454120 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.123.112:6443/api/v1/nodes\": dial tcp 159.69.123.112:6443: connect: connection refused" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.516348 containerd[1582]: time="2026-01-28T00:01:53.516145984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4593-0-0-n-20383d5ef7,Uid:d84671642873fcd842d501d0e1d32add,Namespace:kube-system,Attempt:0,}" Jan 28 00:01:53.531368 containerd[1582]: time="2026-01-28T00:01:53.531269345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4593-0-0-n-20383d5ef7,Uid:42d8c7a7e8aece5ef2d333916dd4b152,Namespace:kube-system,Attempt:0,}" Jan 28 00:01:53.541277 containerd[1582]: time="2026-01-28T00:01:53.541052954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4593-0-0-n-20383d5ef7,Uid:b708d159e719b348fda517baecaf1164,Namespace:kube-system,Attempt:0,}" Jan 28 00:01:53.568785 containerd[1582]: time="2026-01-28T00:01:53.568297830Z" level=info msg="connecting to shim 08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f" address="unix:///run/containerd/s/01df3c33092bf2b4dab0baf4da040075e26886324426359c870de682bcd7bba2" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:01:53.610035 containerd[1582]: time="2026-01-28T00:01:53.609788477Z" level=info msg="connecting to shim b7b524930e4f4d33a0d84d29d8545f3458875d96cbcbb3292754153c6ae3e92f" address="unix:///run/containerd/s/f71a339b3c2f09dd242744c172d58c4eae64476ff3d3f67a07f7a9f00317136e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:01:53.631999 kubelet[2389]: E0128 00:01:53.628434 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://159.69.123.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4593-0-0-n-20383d5ef7?timeout=10s\": dial tcp 159.69.123.112:6443: connect: connection refused" interval="800ms" Jan 28 00:01:53.649404 systemd[1]: Started cri-containerd-08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f.scope - libcontainer container 08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f. 
Jan 28 00:01:53.652059 containerd[1582]: time="2026-01-28T00:01:53.652001970Z" level=info msg="connecting to shim eaa08f32707128ea39bcd9271be47f3c0734af76f1ae223657197d1f0578b9a7" address="unix:///run/containerd/s/6e6f870c8b2e6dcd81d9fd29f07f8a1a6d9bad9e1276f072c5f19021df6b14d3" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:01:53.675989 systemd[1]: Started cri-containerd-b7b524930e4f4d33a0d84d29d8545f3458875d96cbcbb3292754153c6ae3e92f.scope - libcontainer container b7b524930e4f4d33a0d84d29d8545f3458875d96cbcbb3292754153c6ae3e92f. Jan 28 00:01:53.691000 audit: BPF prog-id=83 op=LOAD Jan 28 00:01:53.692000 audit: BPF prog-id=84 op=LOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.692000 audit: BPF prog-id=84 op=UNLOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.692000 audit: BPF prog-id=85 op=LOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.692000 audit: BPF prog-id=86 op=LOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.692000 audit: BPF prog-id=86 op=UNLOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.692000 audit: BPF prog-id=85 op=UNLOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.692000 audit: BPF prog-id=87 op=LOAD Jan 28 00:01:53.692000 audit[2442]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=2429 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038323835393733353336313039353530333238323431653036663333 Jan 28 00:01:53.714750 systemd[1]: Started cri-containerd-eaa08f32707128ea39bcd9271be47f3c0734af76f1ae223657197d1f0578b9a7.scope - libcontainer container eaa08f32707128ea39bcd9271be47f3c0734af76f1ae223657197d1f0578b9a7. 
Jan 28 00:01:53.728000 audit: BPF prog-id=88 op=LOAD Jan 28 00:01:53.730000 audit: BPF prog-id=89 op=LOAD Jan 28 00:01:53.730000 audit[2472]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.730000 audit: BPF prog-id=89 op=UNLOAD Jan 28 00:01:53.730000 audit[2472]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.731000 audit: BPF prog-id=90 op=LOAD Jan 28 00:01:53.731000 audit[2472]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.731000 audit: BPF prog-id=91 op=LOAD Jan 28 00:01:53.731000 audit[2472]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.731000 audit: BPF prog-id=91 op=UNLOAD Jan 28 00:01:53.731000 audit[2472]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.731000 audit: BPF prog-id=90 op=UNLOAD Jan 28 00:01:53.731000 audit[2472]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.731000 audit: BPF prog-id=92 op=LOAD Jan 28 00:01:53.731000 audit[2472]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2456 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6237623532343933306534663464333361306438346432396438353435 Jan 28 00:01:53.754657 containerd[1582]: time="2026-01-28T00:01:53.754525744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4593-0-0-n-20383d5ef7,Uid:d84671642873fcd842d501d0e1d32add,Namespace:kube-system,Attempt:0,} returns sandbox id \"08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f\"" Jan 28 00:01:53.765816 containerd[1582]: time="2026-01-28T00:01:53.765723431Z" level=info msg="CreateContainer within sandbox \"08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:01:53.773000 audit: BPF prog-id=93 op=LOAD Jan 28 00:01:53.775000 audit: BPF prog-id=94 op=LOAD Jan 28 00:01:53.775000 audit[2506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.775000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.776000 audit: BPF prog-id=94 op=UNLOAD Jan 28 00:01:53.776000 audit[2506]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.776000 audit: BPF prog-id=95 op=LOAD Jan 28 00:01:53.776000 audit[2506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.776000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.778000 audit: BPF prog-id=96 op=LOAD Jan 28 00:01:53.778000 audit[2506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.778000 audit: BPF prog-id=96 op=UNLOAD Jan 28 00:01:53.778000 audit[2506]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.778000 audit: BPF prog-id=95 op=UNLOAD Jan 28 00:01:53.778000 audit[2506]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.778000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.779000 audit: BPF prog-id=97 op=LOAD Jan 28 00:01:53.779000 audit[2506]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2485 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.779000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561613038663332373037313238656133396263643932373162653437 Jan 28 00:01:53.793241 containerd[1582]: time="2026-01-28T00:01:53.793082208Z" level=info msg="Container 09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:01:53.811643 containerd[1582]: time="2026-01-28T00:01:53.811208571Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4593-0-0-n-20383d5ef7,Uid:42d8c7a7e8aece5ef2d333916dd4b152,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7b524930e4f4d33a0d84d29d8545f3458875d96cbcbb3292754153c6ae3e92f\"" Jan 28 00:01:53.817746 containerd[1582]: time="2026-01-28T00:01:53.817657788Z" level=info msg="CreateContainer within sandbox \"b7b524930e4f4d33a0d84d29d8545f3458875d96cbcbb3292754153c6ae3e92f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:01:53.821179 containerd[1582]: time="2026-01-28T00:01:53.821115472Z" level=info msg="CreateContainer within sandbox \"08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444\"" Jan 28 00:01:53.834008 containerd[1582]: time="2026-01-28T00:01:53.833873029Z" level=info msg="Container 07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:01:53.838547 containerd[1582]: time="2026-01-28T00:01:53.838005201Z" level=info msg="StartContainer for \"09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444\"" Jan 28 00:01:53.840232 containerd[1582]: time="2026-01-28T00:01:53.840160988Z" level=info msg="connecting to shim 09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444" address="unix:///run/containerd/s/01df3c33092bf2b4dab0baf4da040075e26886324426359c870de682bcd7bba2" protocol=ttrpc version=3 Jan 28 00:01:53.849074 containerd[1582]: time="2026-01-28T00:01:53.849008938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4593-0-0-n-20383d5ef7,Uid:b708d159e719b348fda517baecaf1164,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaa08f32707128ea39bcd9271be47f3c0734af76f1ae223657197d1f0578b9a7\"" Jan 28 00:01:53.859478 containerd[1582]: time="2026-01-28T00:01:53.859006160Z" level=info msg="CreateContainer within sandbox \"eaa08f32707128ea39bcd9271be47f3c0734af76f1ae223657197d1f0578b9a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:01:53.860851 kubelet[2389]: I0128 00:01:53.860808 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.862647 containerd[1582]: time="2026-01-28T00:01:53.862328043Z" level=info msg="CreateContainer within sandbox \"b7b524930e4f4d33a0d84d29d8545f3458875d96cbcbb3292754153c6ae3e92f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598\"" Jan 28 00:01:53.864007 containerd[1582]: time="2026-01-28T00:01:53.863942985Z" level=info msg="StartContainer for \"07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598\"" Jan 28 00:01:53.865745 kubelet[2389]: E0128 00:01:53.863638 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://159.69.123.112:6443/api/v1/nodes\": dial tcp 159.69.123.112:6443: connect: connection refused" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:53.869574 containerd[1582]: time="2026-01-28T00:01:53.866906463Z" level=info msg="connecting to shim 07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598" address="unix:///run/containerd/s/f71a339b3c2f09dd242744c172d58c4eae64476ff3d3f67a07f7a9f00317136e" protocol=ttrpc version=3 Jan 28 00:01:53.902190 systemd[1]: Started cri-containerd-09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444.scope - libcontainer 
container 09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444. Jan 28 00:01:53.916250 containerd[1582]: time="2026-01-28T00:01:53.916185990Z" level=info msg="Container 0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:01:53.923992 systemd[1]: Started cri-containerd-07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598.scope - libcontainer container 07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598. Jan 28 00:01:53.933564 containerd[1582]: time="2026-01-28T00:01:53.933484760Z" level=info msg="CreateContainer within sandbox \"eaa08f32707128ea39bcd9271be47f3c0734af76f1ae223657197d1f0578b9a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe\"" Jan 28 00:01:53.937978 containerd[1582]: time="2026-01-28T00:01:53.935468738Z" level=info msg="StartContainer for \"0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe\"" Jan 28 00:01:53.943648 containerd[1582]: time="2026-01-28T00:01:53.942580414Z" level=info msg="connecting to shim 0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe" address="unix:///run/containerd/s/6e6f870c8b2e6dcd81d9fd29f07f8a1a6d9bad9e1276f072c5f19021df6b14d3" protocol=ttrpc version=3 Jan 28 00:01:53.949000 audit: BPF prog-id=98 op=LOAD Jan 28 00:01:53.950000 audit: BPF prog-id=99 op=LOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.950000 audit: BPF prog-id=99 op=UNLOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.950000 audit: BPF prog-id=100 op=LOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.950000 audit: BPF prog-id=101 op=LOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.950000 audit: BPF prog-id=101 op=UNLOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.950000 audit: BPF prog-id=100 op=UNLOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.950000 audit: BPF prog-id=102 op=LOAD Jan 28 00:01:53.950000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2429 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.950000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039646135333433383038383162386564336639643264346438643436 Jan 28 00:01:53.958000 audit: BPF prog-id=103 op=LOAD Jan 28 00:01:53.959000 audit: BPF prog-id=104 op=LOAD Jan 28 00:01:53.959000 audit[2568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.959000 audit: BPF prog-id=104 op=UNLOAD Jan 28 00:01:53.959000 audit[2568]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.959000 audit: BPF prog-id=105 op=LOAD Jan 28 00:01:53.959000 audit[2568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.960000 audit: BPF prog-id=106 op=LOAD Jan 28 00:01:53.960000 audit[2568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=24 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.960000 audit: BPF prog-id=106 op=UNLOAD Jan 28 00:01:53.960000 audit[2568]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=18 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.960000 audit: BPF prog-id=105 op=UNLOAD Jan 28 00:01:53.960000 audit[2568]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:53.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.960000 audit: BPF prog-id=107 op=LOAD Jan 28 00:01:53.960000 audit[2568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2456 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 28 00:01:53.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3037643531333665373938353132353032336539383332366236363339 Jan 28 00:01:53.998276 systemd[1]: Started cri-containerd-0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe.scope - libcontainer container 0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe. Jan 28 00:01:54.001569 kubelet[2389]: W0128 00:01:54.001439 2389 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://159.69.123.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 159.69.123.112:6443: connect: connection refused Jan 28 00:01:54.001569 kubelet[2389]: E0128 00:01:54.001536 2389 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://159.69.123.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 159.69.123.112:6443: connect: connection refused" logger="UnhandledError" Jan 28 00:01:54.035000 audit: BPF prog-id=108 op=LOAD Jan 28 00:01:54.040000 audit: BPF prog-id=109 op=LOAD Jan 28 00:01:54.040000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.040000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.042000 audit: BPF prog-id=109 op=UNLOAD Jan 28 00:01:54.042000 audit[2596]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.042000 audit: BPF prog-id=110 op=LOAD Jan 28 00:01:54.042000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.042000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.044000 audit: BPF prog-id=111 op=LOAD Jan 28 00:01:54.044000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.044000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.046000 audit: BPF prog-id=111 op=UNLOAD Jan 28 00:01:54.046000 audit[2596]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.046000 audit: BPF prog-id=110 op=UNLOAD Jan 28 00:01:54.046000 audit[2596]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.046000 audit: BPF prog-id=112 op=LOAD Jan 28 00:01:54.046000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2485 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:01:54.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065653633313261623165356166633733346132653733353762346338 Jan 28 00:01:54.054633 containerd[1582]: time="2026-01-28T00:01:54.054531604Z" level=info msg="StartContainer for \"07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598\" returns successfully" Jan 28 00:01:54.056357 containerd[1582]: time="2026-01-28T00:01:54.056225223Z" level=info msg="StartContainer for \"09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444\" returns successfully" Jan 28 00:01:54.110323 kubelet[2389]: E0128 00:01:54.110211 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:54.122636 kubelet[2389]: E0128 00:01:54.122501 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:54.134779 containerd[1582]: time="2026-01-28T00:01:54.134717761Z" level=info msg="StartContainer for 
\"0ee6312ab1e5afc734a2e7357b4c8ffc5866e36862ba6d230a4f31a64f895cbe\" returns successfully" Jan 28 00:01:54.669334 kubelet[2389]: I0128 00:01:54.669256 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:55.126474 kubelet[2389]: E0128 00:01:55.126362 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:55.128347 kubelet[2389]: E0128 00:01:55.128284 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:56.130250 kubelet[2389]: E0128 00:01:56.129995 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:56.299199 kubelet[2389]: E0128 00:01:56.298982 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.133646 kubelet[2389]: E0128 00:01:57.132795 2389 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.144811 kubelet[2389]: E0128 00:01:57.144716 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4593-0-0-n-20383d5ef7\" not found" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.192772 kubelet[2389]: I0128 00:01:57.192695 2389 kubelet_node_status.go:78] "Successfully registered node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.219193 kubelet[2389]: I0128 00:01:57.218787 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.255066 kubelet[2389]: E0128 00:01:57.255018 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.255444 kubelet[2389]: I0128 00:01:57.255313 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.259764 kubelet[2389]: E0128 00:01:57.259659 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4593-0-0-n-20383d5ef7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.259764 kubelet[2389]: I0128 00:01:57.259704 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.267194 kubelet[2389]: E0128 00:01:57.267124 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.439765 kubelet[2389]: I0128 00:01:57.439728 2389 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.447343 kubelet[2389]: E0128 
00:01:57.447198 2389 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:01:57.974702 kubelet[2389]: I0128 00:01:57.974655 2389 apiserver.go:52] "Watching apiserver" Jan 28 00:01:58.019971 kubelet[2389]: I0128 00:01:58.019924 2389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:01:59.326397 systemd[1]: Reload requested from client PID 2655 ('systemctl') (unit session-8.scope)... Jan 28 00:01:59.326417 systemd[1]: Reloading... Jan 28 00:01:59.473630 zram_generator::config[2711]: No configuration found. Jan 28 00:01:59.708488 systemd[1]: Reloading finished in 381 ms. Jan 28 00:01:59.746166 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:01:59.763219 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:01:59.765696 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:59.769751 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 28 00:01:59.769817 kernel: audit: type=1131 audit(1769558519.764:391): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:59.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:01:59.765808 systemd[1]: kubelet.service: Consumed 1.743s CPU time, 128.3M memory peak. Jan 28 00:01:59.771818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 00:01:59.774914 kernel: audit: type=1334 audit(1769558519.771:392): prog-id=113 op=LOAD Jan 28 00:01:59.775049 kernel: audit: type=1334 audit(1769558519.771:393): prog-id=79 op=UNLOAD Jan 28 00:01:59.771000 audit: BPF prog-id=113 op=LOAD Jan 28 00:01:59.771000 audit: BPF prog-id=79 op=UNLOAD Jan 28 00:01:59.773000 audit: BPF prog-id=114 op=LOAD Jan 28 00:01:59.776266 kernel: audit: type=1334 audit(1769558519.773:394): prog-id=114 op=LOAD Jan 28 00:01:59.773000 audit: BPF prog-id=115 op=LOAD Jan 28 00:01:59.777015 kernel: audit: type=1334 audit(1769558519.773:395): prog-id=115 op=LOAD Jan 28 00:01:59.773000 audit: BPF prog-id=80 op=UNLOAD Jan 28 00:01:59.773000 audit: BPF prog-id=81 op=UNLOAD Jan 28 00:01:59.777625 kernel: audit: type=1334 audit(1769558519.773:396): prog-id=80 op=UNLOAD Jan 28 00:01:59.778637 kernel: audit: type=1334 audit(1769558519.773:397): prog-id=81 op=UNLOAD Jan 28 00:01:59.781205 kernel: audit: type=1334 audit(1769558519.778:398): prog-id=116 op=LOAD Jan 28 00:01:59.781308 kernel: audit: type=1334 audit(1769558519.778:399): prog-id=75 op=UNLOAD Jan 28 00:01:59.778000 audit: BPF prog-id=116 op=LOAD Jan 28 00:01:59.778000 audit: BPF prog-id=75 op=UNLOAD Jan 28 00:01:59.778000 audit: BPF prog-id=117 op=LOAD Jan 28 00:01:59.782266 kernel: audit: type=1334 audit(1769558519.778:400): prog-id=117 op=LOAD Jan 28 00:01:59.779000 audit: BPF prog-id=118 op=LOAD Jan 28 00:01:59.779000 audit: BPF prog-id=76 op=UNLOAD Jan 28 00:01:59.779000 audit: BPF prog-id=77 op=UNLOAD Jan 28 00:01:59.779000 audit: BPF prog-id=119 op=LOAD Jan 28 00:01:59.780000 audit: BPF prog-id=120 op=LOAD Jan 28 00:01:59.780000 audit: BPF prog-id=63 op=UNLOAD Jan 28 00:01:59.780000 audit: BPF prog-id=64 op=UNLOAD Jan 28 00:01:59.784000 audit: BPF prog-id=121 op=LOAD Jan 28 00:01:59.784000 audit: BPF prog-id=82 op=UNLOAD Jan 28 00:01:59.784000 audit: BPF prog-id=122 op=LOAD Jan 28 00:01:59.784000 audit: BPF prog-id=72 op=UNLOAD Jan 28 00:01:59.784000 audit: BPF prog-id=123 op=LOAD Jan 28 00:01:59.784000 audit: BPF prog-id=124 op=LOAD Jan 28 00:01:59.784000 audit: BPF prog-id=73 op=UNLOAD Jan 28 00:01:59.784000 audit: BPF prog-id=74 op=UNLOAD Jan 28 00:01:59.785000 audit: BPF prog-id=125 op=LOAD Jan 28 00:01:59.791000 audit: BPF prog-id=68 op=UNLOAD Jan 28 00:01:59.792000 audit: BPF prog-id=126 op=LOAD Jan 28 00:01:59.792000 audit: BPF prog-id=65 op=UNLOAD Jan 28 00:01:59.792000 audit: BPF prog-id=127 op=LOAD Jan 28 00:01:59.792000 audit: BPF prog-id=128 op=LOAD Jan 28 00:01:59.792000 audit: BPF prog-id=66 op=UNLOAD Jan 28 00:01:59.792000 audit: BPF prog-id=67 op=UNLOAD Jan 28 00:01:59.794000 audit: BPF prog-id=129 op=LOAD Jan 28 00:01:59.794000 audit: BPF prog-id=78 op=UNLOAD Jan 28 00:01:59.796000 audit: BPF prog-id=130 op=LOAD Jan 28 00:01:59.796000 audit: BPF prog-id=69 op=UNLOAD Jan 28 00:01:59.796000 audit: BPF prog-id=131 op=LOAD Jan 28 00:01:59.796000 audit: BPF prog-id=132 op=LOAD Jan 28 00:01:59.796000 audit: BPF prog-id=70 op=UNLOAD Jan 28 00:01:59.796000 audit: BPF prog-id=71 op=UNLOAD Jan 28 00:01:59.951172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:01:59.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:01:59.967176 (kubelet)[2747]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:02:00.033257 kubelet[2747]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:02:00.033257 kubelet[2747]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:02:00.033257 kubelet[2747]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:02:00.035827 kubelet[2747]: I0128 00:02:00.033389 2747 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:02:00.044645 kubelet[2747]: I0128 00:02:00.044334 2747 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 00:02:00.044645 kubelet[2747]: I0128 00:02:00.044377 2747 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:02:00.046054 kubelet[2747]: I0128 00:02:00.045814 2747 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 00:02:00.049240 kubelet[2747]: I0128 00:02:00.049201 2747 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 00:02:00.058758 kubelet[2747]: I0128 00:02:00.058389 2747 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:02:00.063195 kubelet[2747]: I0128 00:02:00.063165 2747 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 28 00:02:00.066309 kubelet[2747]: I0128 00:02:00.066270 2747 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 00:02:00.066530 kubelet[2747]: I0128 00:02:00.066497 2747 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:02:00.066786 kubelet[2747]: I0128 00:02:00.066532 2747 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4593-0-0-n-20383d5ef7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:02:00.066786 kubelet[2747]: I0128 00:02:00.066767 2747 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:02:00.066786 kubelet[2747]: I0128 00:02:00.066777 2747 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 00:02:00.066953 kubelet[2747]: I0128 00:02:00.066829 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:02:00.067028 kubelet[2747]: I0128 00:02:00.067018 2747 kubelet.go:446] "Attempting to sync node with API server" Jan 28 00:02:00.067054 kubelet[2747]: I0128 00:02:00.067039 2747 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:02:00.067109 kubelet[2747]: I0128 00:02:00.067067 2747 kubelet.go:352] "Adding apiserver pod source" Jan 28 00:02:00.067652 kubelet[2747]: I0128 00:02:00.067133 2747 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:02:00.069117 kubelet[2747]: I0128 00:02:00.069011 2747 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 28 00:02:00.070732 kubelet[2747]: I0128 00:02:00.069932 2747 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 00:02:00.074839 kubelet[2747]: I0128 00:02:00.074450 2747 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 00:02:00.074839 kubelet[2747]: I0128 00:02:00.074504 2747 server.go:1287] "Started kubelet" Jan 28 00:02:00.078632 kubelet[2747]: I0128 00:02:00.078556 2747 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:02:00.081776 kubelet[2747]: I0128 00:02:00.081709 2747 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:02:00.082574 kubelet[2747]: I0128 00:02:00.082535 2747 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:02:00.084601 kubelet[2747]: I0128 00:02:00.084550 2747 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 00:02:00.084890 kubelet[2747]: E0128 00:02:00.084861 2747 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4593-0-0-n-20383d5ef7\" not found" Jan 28 00:02:00.085668 kubelet[2747]: I0128 00:02:00.085644 2747 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 00:02:00.085791 kubelet[2747]: I0128 00:02:00.085778 2747 reconciler.go:26] "Reconciler: start to sync state" Jan 28 00:02:00.087648 kubelet[2747]: I0128 00:02:00.087625 2747 server.go:479] "Adding debug handlers to kubelet server" Jan 28 00:02:00.095868 kubelet[2747]: I0128 00:02:00.093314 2747 factory.go:221] Registration of the systemd container factory successfully Jan 28 00:02:00.096841 kubelet[2747]: I0128 00:02:00.096809 2747 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:02:00.097720 kubelet[2747]: I0128 00:02:00.096804 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 00:02:00.102153 kubelet[2747]: I0128 00:02:00.101751 2747 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 00:02:00.103024 kubelet[2747]: I0128 00:02:00.102990 2747 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 00:02:00.104983 kubelet[2747]: I0128 00:02:00.104943 2747 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:02:00.105070 kubelet[2747]: I0128 00:02:00.105061 2747 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 00:02:00.105258 kubelet[2747]: E0128 00:02:00.105232 2747 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:02:00.116630 kubelet[2747]: I0128 00:02:00.096875 2747 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:02:00.116630 kubelet[2747]: I0128 00:02:00.115264 2747 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:02:00.116630 kubelet[2747]: I0128 00:02:00.116042 2747 factory.go:221] Registration of the containerd container factory successfully Jan 28 00:02:00.118826 kubelet[2747]: E0128 00:02:00.118798 2747 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:02:00.199533 kubelet[2747]: I0128 00:02:00.199499 2747 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:02:00.199533 kubelet[2747]: I0128 00:02:00.199528 2747 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:02:00.199754 kubelet[2747]: I0128 00:02:00.199563 2747 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:02:00.200058 kubelet[2747]: I0128 00:02:00.200010 2747 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:02:00.200107 kubelet[2747]: I0128 00:02:00.200041 2747 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:02:00.200107 kubelet[2747]: I0128 00:02:00.200086 2747 policy_none.go:49] "None policy: Start" Jan 28 00:02:00.200107 kubelet[2747]: I0128 00:02:00.200099 2747 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 00:02:00.200184 kubelet[2747]: I0128 00:02:00.200114 2747 state_mem.go:35] "Initializing new in-memory state store" Jan 28 00:02:00.200315 kubelet[2747]: I0128 00:02:00.200298 2747 state_mem.go:75] "Updated machine memory state" Jan 28 00:02:00.205624 kubelet[2747]: E0128 00:02:00.205562 2747 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 00:02:00.206960 kubelet[2747]: I0128 00:02:00.206020 2747 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 00:02:00.206960 kubelet[2747]: I0128 00:02:00.206230 2747 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:02:00.206960 kubelet[2747]: I0128 00:02:00.206243 2747 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:02:00.207489 kubelet[2747]: I0128 00:02:00.207466 2747 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:02:00.208390 kubelet[2747]: E0128 00:02:00.208345 2747 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:02:00.316516 kubelet[2747]: I0128 00:02:00.316407 2747 kubelet_node_status.go:75] "Attempting to register node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.333705 kubelet[2747]: I0128 00:02:00.333554 2747 kubelet_node_status.go:124] "Node was previously registered" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.333705 kubelet[2747]: I0128 00:02:00.333678 2747 kubelet_node_status.go:78] "Successfully registered node" node="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.407001 kubelet[2747]: I0128 00:02:00.406119 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.407001 kubelet[2747]: I0128 00:02:00.406579 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.407001 kubelet[2747]: I0128 00:02:00.406864 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.486999 kubelet[2747]: I0128 00:02:00.486938 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487141 kubelet[2747]: I0128 00:02:00.487037 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42d8c7a7e8aece5ef2d333916dd4b152-kubeconfig\") pod \"kube-scheduler-ci-4593-0-0-n-20383d5ef7\" (UID: \"42d8c7a7e8aece5ef2d333916dd4b152\") " pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487141 kubelet[2747]: I0128 00:02:00.487071 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b708d159e719b348fda517baecaf1164-k8s-certs\") pod \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" (UID: \"b708d159e719b348fda517baecaf1164\") " pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487141 kubelet[2747]: I0128 00:02:00.487090 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-ca-certs\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487141 kubelet[2747]: I0128 00:02:00.487108 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-k8s-certs\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487280 kubelet[2747]: I0128 00:02:00.487147 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-kubeconfig\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: 
\"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487280 kubelet[2747]: I0128 00:02:00.487165 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b708d159e719b348fda517baecaf1164-ca-certs\") pod \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" (UID: \"b708d159e719b348fda517baecaf1164\") " pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487280 kubelet[2747]: I0128 00:02:00.487180 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b708d159e719b348fda517baecaf1164-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" (UID: \"b708d159e719b348fda517baecaf1164\") " pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:00.487280 kubelet[2747]: I0128 00:02:00.487234 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d84671642873fcd842d501d0e1d32add-flexvolume-dir\") pod \"kube-controller-manager-ci-4593-0-0-n-20383d5ef7\" (UID: \"d84671642873fcd842d501d0e1d32add\") " pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:01.083492 kubelet[2747]: I0128 00:02:01.083173 2747 apiserver.go:52] "Watching apiserver" Jan 28 00:02:01.169507 kubelet[2747]: I0128 00:02:01.169271 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:01.170849 kubelet[2747]: I0128 00:02:01.170816 2747 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:01.181012 kubelet[2747]: E0128 00:02:01.180918 2747 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4593-0-0-n-20383d5ef7\" already exists" pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:01.184088 kubelet[2747]: E0128 00:02:01.184045 2747 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4593-0-0-n-20383d5ef7\" already exists" pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:01.189904 kubelet[2747]: I0128 00:02:01.189866 2747 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 00:02:01.238532 kubelet[2747]: I0128 00:02:01.238331 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4593-0-0-n-20383d5ef7" podStartSLOduration=1.23830741 podStartE2EDuration="1.23830741s" podCreationTimestamp="2026-01-28 00:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:02:01.220434705 +0000 UTC m=+1.247317624" watchObservedRunningTime="2026-01-28 00:02:01.23830741 +0000 UTC m=+1.265190329" Jan 28 00:02:01.239006 kubelet[2747]: I0128 00:02:01.238539 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4593-0-0-n-20383d5ef7" podStartSLOduration=1.238533152 podStartE2EDuration="1.238533152s" podCreationTimestamp="2026-01-28 00:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:02:01.23502102 +0000 UTC 
m=+1.261903979" watchObservedRunningTime="2026-01-28 00:02:01.238533152 +0000 UTC m=+1.265416071" Jan 28 00:02:01.270726 kubelet[2747]: I0128 00:02:01.269987 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4593-0-0-n-20383d5ef7" podStartSLOduration=1.2699631679999999 podStartE2EDuration="1.269963168s" podCreationTimestamp="2026-01-28 00:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:02:01.253802297 +0000 UTC m=+1.280685216" watchObservedRunningTime="2026-01-28 00:02:01.269963168 +0000 UTC m=+1.296846087" Jan 28 00:02:05.077666 kubelet[2747]: I0128 00:02:05.077509 2747 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:02:05.078798 containerd[1582]: time="2026-01-28T00:02:05.078741245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:02:05.079702 kubelet[2747]: I0128 00:02:05.079369 2747 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:02:05.846378 systemd[1]: Created slice kubepods-besteffort-pod1bda565f_6919_43f3_b8c3_cd62f43850b6.slice - libcontainer container kubepods-besteffort-pod1bda565f_6919_43f3_b8c3_cd62f43850b6.slice. Jan 28 00:02:06.026815 kubelet[2747]: I0128 00:02:06.026368 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1bda565f-6919-43f3-b8c3-cd62f43850b6-kube-proxy\") pod \"kube-proxy-hbpfm\" (UID: \"1bda565f-6919-43f3-b8c3-cd62f43850b6\") " pod="kube-system/kube-proxy-hbpfm" Jan 28 00:02:06.027621 kubelet[2747]: I0128 00:02:06.027015 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1bda565f-6919-43f3-b8c3-cd62f43850b6-lib-modules\") pod \"kube-proxy-hbpfm\" (UID: \"1bda565f-6919-43f3-b8c3-cd62f43850b6\") " pod="kube-system/kube-proxy-hbpfm" Jan 28 00:02:06.027621 kubelet[2747]: I0128 00:02:06.027072 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1bda565f-6919-43f3-b8c3-cd62f43850b6-xtables-lock\") pod \"kube-proxy-hbpfm\" (UID: \"1bda565f-6919-43f3-b8c3-cd62f43850b6\") " pod="kube-system/kube-proxy-hbpfm" Jan 28 00:02:06.027621 kubelet[2747]: I0128 00:02:06.027117 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zvtj\" (UniqueName: \"kubernetes.io/projected/1bda565f-6919-43f3-b8c3-cd62f43850b6-kube-api-access-6zvtj\") pod \"kube-proxy-hbpfm\" (UID: \"1bda565f-6919-43f3-b8c3-cd62f43850b6\") " pod="kube-system/kube-proxy-hbpfm" Jan 28 00:02:06.161949 containerd[1582]: time="2026-01-28T00:02:06.161267545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hbpfm,Uid:1bda565f-6919-43f3-b8c3-cd62f43850b6,Namespace:kube-system,Attempt:0,}" Jan 28 00:02:06.209271 containerd[1582]: time="2026-01-28T00:02:06.208698542Z" level=info msg="connecting to shim 09913bf6f76bce3527358a907aef0129a9627c1c2ca222622f8ed2c8cd56cb56" address="unix:///run/containerd/s/2e6d5ed5abd2ab6dcfad468f20d89c4b3505824e7c9581ad2daf823b9d5457bf" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:06.215512 systemd[1]: Created slice 
kubepods-besteffort-podd0d72465_f99f_4309_be1a_b6b00e8a2e61.slice - libcontainer container kubepods-besteffort-podd0d72465_f99f_4309_be1a_b6b00e8a2e61.slice. Jan 28 00:02:06.256135 systemd[1]: Started cri-containerd-09913bf6f76bce3527358a907aef0129a9627c1c2ca222622f8ed2c8cd56cb56.scope - libcontainer container 09913bf6f76bce3527358a907aef0129a9627c1c2ca222622f8ed2c8cd56cb56. Jan 28 00:02:06.268000 audit: BPF prog-id=133 op=LOAD Jan 28 00:02:06.270763 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 28 00:02:06.270851 kernel: audit: type=1334 audit(1769558526.268:433): prog-id=133 op=LOAD Jan 28 00:02:06.270000 audit: BPF prog-id=134 op=LOAD Jan 28 00:02:06.270000 audit[2811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.274530 kernel: audit: type=1334 audit(1769558526.270:434): prog-id=134 op=LOAD Jan 28 00:02:06.274737 kernel: audit: type=1300 audit(1769558526.270:434): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.274827 kernel: audit: type=1327 audit(1769558526.270:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.270000 audit: BPF prog-id=134 op=UNLOAD Jan 28 00:02:06.281722 kernel: audit: type=1334 audit(1769558526.270:435): prog-id=134 op=UNLOAD Jan 28 00:02:06.281878 kernel: audit: type=1300 audit(1769558526.270:435): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.270000 audit[2811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.287798 kernel: audit: type=1327 audit(1769558526.270:435): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 
00:02:06.287948 kernel: audit: type=1334 audit(1769558526.272:436): prog-id=135 op=LOAD Jan 28 00:02:06.272000 audit: BPF prog-id=135 op=LOAD Jan 28 00:02:06.272000 audit[2811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.291016 kernel: audit: type=1300 audit(1769558526.272:436): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.291070 kernel: audit: type=1327 audit(1769558526.272:436): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.274000 audit: BPF prog-id=136 op=LOAD Jan 28 00:02:06.274000 audit[2811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.274000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.279000 audit: BPF prog-id=136 op=UNLOAD Jan 28 00:02:06.279000 audit[2811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.279000 audit: BPF prog-id=135 op=UNLOAD Jan 28 00:02:06.279000 audit[2811]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.280000 audit: BPF prog-id=137 
op=LOAD Jan 28 00:02:06.280000 audit[2811]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2799 pid=2811 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039393133626636663736626365333532373335386139303761656630 Jan 28 00:02:06.316025 containerd[1582]: time="2026-01-28T00:02:06.315943717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hbpfm,Uid:1bda565f-6919-43f3-b8c3-cd62f43850b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"09913bf6f76bce3527358a907aef0129a9627c1c2ca222622f8ed2c8cd56cb56\"" Jan 28 00:02:06.320721 containerd[1582]: time="2026-01-28T00:02:06.320679600Z" level=info msg="CreateContainer within sandbox \"09913bf6f76bce3527358a907aef0129a9627c1c2ca222622f8ed2c8cd56cb56\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 00:02:06.331016 kubelet[2747]: I0128 00:02:06.330966 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vf9\" (UniqueName: \"kubernetes.io/projected/d0d72465-f99f-4309-be1a-b6b00e8a2e61-kube-api-access-r9vf9\") pod \"tigera-operator-7dcd859c48-86j49\" (UID: \"d0d72465-f99f-4309-be1a-b6b00e8a2e61\") " pod="tigera-operator/tigera-operator-7dcd859c48-86j49" Jan 28 00:02:06.331394 kubelet[2747]: I0128 00:02:06.331061 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0d72465-f99f-4309-be1a-b6b00e8a2e61-var-lib-calico\") pod \"tigera-operator-7dcd859c48-86j49\" (UID: \"d0d72465-f99f-4309-be1a-b6b00e8a2e61\") " pod="tigera-operator/tigera-operator-7dcd859c48-86j49" Jan 28 00:02:06.336852 containerd[1582]: time="2026-01-28T00:02:06.335347343Z" level=info msg="Container 1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:06.339044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862245039.mount: Deactivated successfully. Jan 28 00:02:06.349500 containerd[1582]: time="2026-01-28T00:02:06.349417721Z" level=info msg="CreateContainer within sandbox \"09913bf6f76bce3527358a907aef0129a9627c1c2ca222622f8ed2c8cd56cb56\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5\"" Jan 28 00:02:06.350692 containerd[1582]: time="2026-01-28T00:02:06.350515313Z" level=info msg="StartContainer for \"1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5\"" Jan 28 00:02:06.354762 containerd[1582]: time="2026-01-28T00:02:06.354715242Z" level=info msg="connecting to shim 1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5" address="unix:///run/containerd/s/2e6d5ed5abd2ab6dcfad468f20d89c4b3505824e7c9581ad2daf823b9d5457bf" protocol=ttrpc version=3 Jan 28 00:02:06.378955 systemd[1]: Started cri-containerd-1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5.scope - libcontainer container 1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5. 
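Once the kube-proxy container is up, the NETFILTER_CFG and PROCTITLE audit records further below show KUBE-* base chains and rules (KUBE-PROXY-CANARY, KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-POSTROUTING, among others) being created in the mangle, nat and filter tables for both IPv4 and IPv6 through /usr/sbin/xtables-nft-multi, consistent with kube-proxy programming its rules. The hex proctitle values decode to the individual iptables/ip6tables invocations; the kernel truncates very long command lines, so a few decoded rules come out cut short. A small sketch for reconstructing that command sequence, assuming the journal has been saved to a plain-text file named journal.txt (that path and the regex are assumptions, not part of the log):

    import re

    PROCTITLE_RE = re.compile(r"proctitle=([0-9A-Fa-f]+)")

    def decode(hex_value: str) -> str:
        # audit PROCTITLE: hex-encoded command line, NUL bytes between arguments
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

    # Pull every hex-encoded PROCTITLE out of the saved journal and keep the
    # iptables/ip6tables invocations (this also catches iptables-restore).
    with open("journal.txt") as fh:
        commands = [decode(m.group(1)) for m in PROCTITLE_RE.finditer(fh.read())]

    for cmd in commands:
        if cmd.startswith(("iptables", "ip6tables")):
            # e.g. "ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle"
            print(cmd)
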
Jan 28 00:02:06.441000 audit: BPF prog-id=138 op=LOAD Jan 28 00:02:06.441000 audit[2836]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2799 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163393865343262373632336633633133333164623335393834396361 Jan 28 00:02:06.441000 audit: BPF prog-id=139 op=LOAD Jan 28 00:02:06.441000 audit[2836]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2799 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163393865343262373632336633633133333164623335393834396361 Jan 28 00:02:06.441000 audit: BPF prog-id=139 op=UNLOAD Jan 28 00:02:06.441000 audit[2836]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2799 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163393865343262373632336633633133333164623335393834396361 Jan 28 00:02:06.441000 audit: BPF prog-id=138 op=UNLOAD Jan 28 00:02:06.441000 audit[2836]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2799 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163393865343262373632336633633133333164623335393834396361 Jan 28 00:02:06.441000 audit: BPF prog-id=140 op=LOAD Jan 28 00:02:06.441000 audit[2836]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2799 pid=2836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.441000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3163393865343262373632336633633133333164623335393834396361 Jan 28 00:02:06.479883 containerd[1582]: time="2026-01-28T00:02:06.479812555Z" level=info msg="StartContainer for 
\"1c98e42b7623f3c1331db359849cae08ffe5b7adac571a3b40d6d8c9519390d5\" returns successfully" Jan 28 00:02:06.523813 containerd[1582]: time="2026-01-28T00:02:06.523339006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-86j49,Uid:d0d72465-f99f-4309-be1a-b6b00e8a2e61,Namespace:tigera-operator,Attempt:0,}" Jan 28 00:02:06.546929 containerd[1582]: time="2026-01-28T00:02:06.546878987Z" level=info msg="connecting to shim 72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304" address="unix:///run/containerd/s/f600fcd0e4e11649f2ecbd4fa8c9e0030cae7ef039868c16f0226aad24319a60" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:06.582040 systemd[1]: Started cri-containerd-72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304.scope - libcontainer container 72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304. Jan 28 00:02:06.600000 audit: BPF prog-id=141 op=LOAD Jan 28 00:02:06.601000 audit: BPF prog-id=142 op=LOAD Jan 28 00:02:06.601000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.601000 audit: BPF prog-id=142 op=UNLOAD Jan 28 00:02:06.601000 audit[2887]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.601000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.602000 audit: BPF prog-id=143 op=LOAD Jan 28 00:02:06.602000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.602000 audit: BPF prog-id=144 op=LOAD Jan 28 00:02:06.602000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.602000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.602000 audit: BPF prog-id=144 op=UNLOAD Jan 28 00:02:06.602000 audit[2887]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.602000 audit: BPF prog-id=143 op=UNLOAD Jan 28 00:02:06.602000 audit[2887]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.602000 audit: BPF prog-id=145 op=LOAD Jan 28 00:02:06.602000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2874 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.602000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643762656562666434363436663233383738646437373836656430 Jan 28 00:02:06.646234 containerd[1582]: time="2026-01-28T00:02:06.646167358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-86j49,Uid:d0d72465-f99f-4309-be1a-b6b00e8a2e61,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304\"" Jan 28 00:02:06.653435 containerd[1582]: time="2026-01-28T00:02:06.653390327Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 00:02:06.682000 audit[2945]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2945 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.682000 audit[2945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffeaa1d70 a2=0 a3=1 items=0 ppid=2849 pid=2945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 00:02:06.688000 audit[2946]: NETFILTER_CFG table=nat:55 family=10 entries=1 
op=nft_register_chain pid=2946 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.688000 audit[2946]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc025b1d0 a2=0 a3=1 items=0 ppid=2849 pid=2946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.688000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 00:02:06.690000 audit[2948]: NETFILTER_CFG table=filter:56 family=10 entries=1 op=nft_register_chain pid=2948 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.690000 audit[2948]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6409530 a2=0 a3=1 items=0 ppid=2849 pid=2948 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.690000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 00:02:06.692000 audit[2949]: NETFILTER_CFG table=mangle:57 family=2 entries=1 op=nft_register_chain pid=2949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.692000 audit[2949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2162d10 a2=0 a3=1 items=0 ppid=2849 pid=2949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.692000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 28 00:02:06.695000 audit[2950]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.695000 audit[2950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd78b5b40 a2=0 a3=1 items=0 ppid=2849 pid=2950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 28 00:02:06.698000 audit[2951]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=2951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.698000 audit[2951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc82c4230 a2=0 a3=1 items=0 ppid=2849 pid=2951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.698000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 28 00:02:06.785000 audit[2952]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=2952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.785000 audit[2952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffdba5a220 a2=0 a3=1 items=0 
ppid=2849 pid=2952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 00:02:06.789000 audit[2954]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=2954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.789000 audit[2954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffb8fc5d0 a2=0 a3=1 items=0 ppid=2849 pid=2954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 28 00:02:06.793000 audit[2957]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=2957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.793000 audit[2957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff31a8250 a2=0 a3=1 items=0 ppid=2849 pid=2957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 28 00:02:06.794000 audit[2958]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=2958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.794000 audit[2958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0d2d120 a2=0 a3=1 items=0 ppid=2849 pid=2958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 00:02:06.799000 audit[2960]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=2960 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.799000 audit[2960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffffd70ab0 a2=0 a3=1 items=0 ppid=2849 pid=2960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 00:02:06.802000 audit[2961]: NETFILTER_CFG table=filter:65 family=2 
entries=1 op=nft_register_chain pid=2961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.802000 audit[2961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4f2f3a0 a2=0 a3=1 items=0 ppid=2849 pid=2961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 00:02:06.808000 audit[2963]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=2963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.808000 audit[2963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe4813aa0 a2=0 a3=1 items=0 ppid=2849 pid=2963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.808000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 00:02:06.813000 audit[2966]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=2966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.813000 audit[2966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcba04ae0 a2=0 a3=1 items=0 ppid=2849 pid=2966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 28 00:02:06.814000 audit[2967]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.814000 audit[2967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6fef650 a2=0 a3=1 items=0 ppid=2849 pid=2967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 00:02:06.817000 audit[2969]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2969 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.817000 audit[2969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffe6dffa0 a2=0 a3=1 items=0 ppid=2849 pid=2969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.817000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 00:02:06.819000 audit[2970]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=2970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.819000 audit[2970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcbb46790 a2=0 a3=1 items=0 ppid=2849 pid=2970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 00:02:06.822000 audit[2972]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=2972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.822000 audit[2972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc1c96360 a2=0 a3=1 items=0 ppid=2849 pid=2972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.822000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 00:02:06.827000 audit[2975]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2975 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.827000 audit[2975]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffde99ce0 a2=0 a3=1 items=0 ppid=2849 pid=2975 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 00:02:06.831000 audit[2978]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=2978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.831000 audit[2978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb593910 a2=0 a3=1 items=0 ppid=2849 pid=2978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.831000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 00:02:06.833000 audit[2979]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.833000 audit[2979]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffdca6110 a2=0 a3=1 items=0 ppid=2849 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.833000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 00:02:06.836000 audit[2981]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=2981 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.836000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe3b9f040 a2=0 a3=1 items=0 ppid=2849 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:02:06.840000 audit[2984]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=2984 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.840000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe25f3d30 a2=0 a3=1 items=0 ppid=2849 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.840000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:02:06.842000 audit[2985]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.842000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe04f6730 a2=0 a3=1 items=0 ppid=2849 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.842000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 00:02:06.845000 audit[2987]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=2987 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 28 00:02:06.845000 audit[2987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffda7389b0 a2=0 a3=1 items=0 ppid=2849 pid=2987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.845000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 00:02:06.876000 audit[2993]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=2993 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:06.876000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff162a160 a2=0 a3=1 items=0 ppid=2849 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:06.883000 audit[2993]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=2993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:06.883000 audit[2993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff162a160 a2=0 a3=1 items=0 ppid=2849 pid=2993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.883000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:06.888000 audit[2998]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=2998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.888000 audit[2998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff3927290 a2=0 a3=1 items=0 ppid=2849 pid=2998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.888000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 28 00:02:06.893000 audit[3000]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.893000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc8bcdaf0 a2=0 a3=1 items=0 ppid=2849 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 28 00:02:06.898000 audit[3003]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.898000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe2723ff0 a2=0 a3=1 items=0 ppid=2849 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.898000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 28 00:02:06.900000 audit[3004]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.900000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8274050 a2=0 a3=1 items=0 ppid=2849 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 28 00:02:06.904000 audit[3006]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.904000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcdd802a0 a2=0 a3=1 items=0 ppid=2849 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 28 00:02:06.905000 audit[3007]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.905000 audit[3007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcaf2edd0 a2=0 a3=1 items=0 ppid=2849 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 28 00:02:06.909000 audit[3009]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.909000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd801f1a0 a2=0 a3=1 items=0 ppid=2849 pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.909000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 28 00:02:06.913000 audit[3012]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.913000 audit[3012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdf076a10 a2=0 a3=1 items=0 ppid=2849 pid=3012 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 28 00:02:06.915000 audit[3013]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.915000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe931a310 a2=0 a3=1 items=0 ppid=2849 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 28 00:02:06.918000 audit[3015]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.918000 audit[3015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc68185a0 a2=0 a3=1 items=0 ppid=2849 pid=3015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.918000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 28 00:02:06.920000 audit[3016]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.920000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb145190 a2=0 a3=1 items=0 ppid=2849 pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 28 00:02:06.923000 audit[3018]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.923000 audit[3018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffbfea200 a2=0 a3=1 items=0 ppid=2849 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.923000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 28 00:02:06.927000 audit[3021]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule 
pid=3021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.927000 audit[3021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed65bc50 a2=0 a3=1 items=0 ppid=2849 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.927000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 28 00:02:06.932000 audit[3024]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.932000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc452f2c0 a2=0 a3=1 items=0 ppid=2849 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.932000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 28 00:02:06.934000 audit[3025]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.934000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc2d7c6e0 a2=0 a3=1 items=0 ppid=2849 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.934000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 28 00:02:06.938000 audit[3027]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.938000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffec3d230 a2=0 a3=1 items=0 ppid=2849 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.938000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:02:06.944000 audit[3030]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3030 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.944000 audit[3030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc8071b80 a2=0 a3=1 items=0 ppid=2849 pid=3030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.944000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 28 00:02:06.947000 audit[3031]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3031 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.947000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1a18280 a2=0 a3=1 items=0 ppid=2849 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.947000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 28 00:02:06.951000 audit[3033]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3033 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.951000 audit[3033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd7245cb0 a2=0 a3=1 items=0 ppid=2849 pid=3033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.951000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 28 00:02:06.953000 audit[3034]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3034 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.953000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0248d30 a2=0 a3=1 items=0 ppid=2849 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.953000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 28 00:02:06.956000 audit[3036]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3036 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.956000 audit[3036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff8b11c00 a2=0 a3=1 items=0 ppid=2849 pid=3036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.956000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:02:06.961000 audit[3039]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 28 00:02:06.961000 audit[3039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffe225e20 a2=0 a3=1 items=0 ppid=2849 pid=3039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
28 00:02:06.961000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 28 00:02:06.966000 audit[3041]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 00:02:06.966000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffc1fd13e0 a2=0 a3=1 items=0 ppid=2849 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.966000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:06.966000 audit[3041]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 28 00:02:06.966000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc1fd13e0 a2=0 a3=1 items=0 ppid=2849 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:06.966000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:08.788134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192899251.mount: Deactivated successfully. Jan 28 00:02:09.245143 update_engine[1562]: I20260128 00:02:09.244978 1562 update_attempter.cc:509] Updating boot flags... 
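Annotation: the audit PROCTITLE fields in the records above are hex-encoded, NUL-separated argv vectors, so the kube-proxy iptables/ip6tables invocations can be recovered directly from the log. A minimal sketch in Python (the sample string is copied verbatim from the first record above; the helper name is only illustrative):

    # Decode an audit PROCTITLE value (hex of NUL-separated argv) into a command line.
    import shlex

    def decode_proctitle(hex_str: str) -> list[str]:
        raw = bytes.fromhex(hex_str)                                   # hex -> raw bytes
        return raw.rstrip(b"\x00").decode("utf-8", "replace").split("\x00")

    sample = ("69707461626C6573002D770035002D5700313030303030002D4900464F5257415244"
              "002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B7562"
              "65726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244")
    print(shlex.join(decode_proctitle(sample)))
    # -> iptables -w 5 -W 100000 -I FORWARD -t filter -m comment
    #    --comment 'kubernetes forwarding rules' -j KUBE-FORWARD   (printed as one line)

Some of the longer PROCTITLE values above end mid-argument; that appears to be the kernel's audit proctitle length cap rather than corruption, and decoding still yields the leading arguments.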
Jan 28 00:02:10.319064 kubelet[2747]: I0128 00:02:10.318931 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hbpfm" podStartSLOduration=5.318893343 podStartE2EDuration="5.318893343s" podCreationTimestamp="2026-01-28 00:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:02:07.209257584 +0000 UTC m=+7.236140503" watchObservedRunningTime="2026-01-28 00:02:10.318893343 +0000 UTC m=+10.345776262" Jan 28 00:02:14.931345 containerd[1582]: time="2026-01-28T00:02:14.931262172Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:14.932623 containerd[1582]: time="2026-01-28T00:02:14.932352607Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 28 00:02:14.933415 containerd[1582]: time="2026-01-28T00:02:14.933372233Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:14.937908 containerd[1582]: time="2026-01-28T00:02:14.936714470Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:14.937908 containerd[1582]: time="2026-01-28T00:02:14.937455096Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 8.28401804s" Jan 28 00:02:14.937908 containerd[1582]: time="2026-01-28T00:02:14.937486460Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 28 00:02:14.940898 containerd[1582]: time="2026-01-28T00:02:14.940851940Z" level=info msg="CreateContainer within sandbox \"72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 00:02:14.953133 containerd[1582]: time="2026-01-28T00:02:14.952432993Z" level=info msg="Container 59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:14.962423 containerd[1582]: time="2026-01-28T00:02:14.962360810Z" level=info msg="CreateContainer within sandbox \"72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7\"" Jan 28 00:02:14.964915 containerd[1582]: time="2026-01-28T00:02:14.964817961Z" level=info msg="StartContainer for \"59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7\"" Jan 28 00:02:14.966197 containerd[1582]: time="2026-01-28T00:02:14.966157032Z" level=info msg="connecting to shim 59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7" address="unix:///run/containerd/s/f600fcd0e4e11649f2ecbd4fa8c9e0030cae7ef039868c16f0226aad24319a60" protocol=ttrpc version=3 Jan 28 00:02:14.990914 systemd[1]: Started 
cri-containerd-59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7.scope - libcontainer container 59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7. Jan 28 00:02:15.003000 audit: BPF prog-id=146 op=LOAD Jan 28 00:02:15.006141 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 28 00:02:15.006243 kernel: audit: type=1334 audit(1769558535.003:505): prog-id=146 op=LOAD Jan 28 00:02:15.006000 audit: BPF prog-id=147 op=LOAD Jan 28 00:02:15.014867 kernel: audit: type=1334 audit(1769558535.006:506): prog-id=147 op=LOAD Jan 28 00:02:15.014981 kernel: audit: type=1300 audit(1769558535.006:506): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.015018 kernel: audit: type=1327 audit(1769558535.006:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.006000 audit[3067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.006000 audit: BPF prog-id=147 op=UNLOAD Jan 28 00:02:15.017728 kernel: audit: type=1334 audit(1769558535.006:507): prog-id=147 op=UNLOAD Jan 28 00:02:15.017772 kernel: audit: type=1300 audit(1769558535.006:507): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.006000 audit[3067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.020117 kernel: audit: type=1327 audit(1769558535.006:507): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.006000 audit: BPF prog-id=148 op=LOAD Jan 28 00:02:15.023998 kernel: audit: type=1334 audit(1769558535.006:508): prog-id=148 op=LOAD Jan 28 00:02:15.024057 kernel: audit: type=1300 
audit(1769558535.006:508): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.006000 audit[3067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.006000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.028691 kernel: audit: type=1327 audit(1769558535.006:508): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.007000 audit: BPF prog-id=149 op=LOAD Jan 28 00:02:15.007000 audit[3067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.007000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.015000 audit: BPF prog-id=149 op=UNLOAD Jan 28 00:02:15.015000 audit[3067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.015000 audit: BPF prog-id=148 op=UNLOAD Jan 28 00:02:15.015000 audit[3067]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.015000 audit: BPF prog-id=150 op=LOAD Jan 28 00:02:15.015000 audit[3067]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2874 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:15.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539663764613132336636663739326562333038383063363735373630 Jan 28 00:02:15.051836 containerd[1582]: time="2026-01-28T00:02:15.051682118Z" level=info msg="StartContainer for \"59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7\" returns successfully" Jan 28 00:02:15.229141 kubelet[2747]: I0128 00:02:15.228343 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-86j49" podStartSLOduration=0.941233565 podStartE2EDuration="9.22831896s" podCreationTimestamp="2026-01-28 00:02:06 +0000 UTC" firstStartedPulling="2026-01-28 00:02:06.651149612 +0000 UTC m=+6.678032531" lastFinishedPulling="2026-01-28 00:02:14.938235007 +0000 UTC m=+14.965117926" observedRunningTime="2026-01-28 00:02:15.227883901 +0000 UTC m=+15.254766820" watchObservedRunningTime="2026-01-28 00:02:15.22831896 +0000 UTC m=+15.255201879" Jan 28 00:02:21.496357 sudo[1823]: pam_unix(sudo:session): session closed for user root Jan 28 00:02:21.499070 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 28 00:02:21.499103 kernel: audit: type=1106 audit(1769558541.494:513): pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:02:21.494000 audit[1823]: USER_END pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:02:21.494000 audit[1823]: CRED_DISP pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 28 00:02:21.502723 kernel: audit: type=1104 audit(1769558541.494:514): pid=1823 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 28 00:02:21.593484 sshd[1822]: Connection closed by 20.161.92.111 port 35142 Jan 28 00:02:21.595794 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Jan 28 00:02:21.595000 audit[1818]: USER_END pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:02:21.595000 audit[1818]: CRED_DISP pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:02:21.606885 kernel: audit: type=1106 audit(1769558541.595:515): pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:02:21.606952 kernel: audit: type=1104 audit(1769558541.595:516): pid=1818 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:02:21.608252 systemd[1]: sshd@6-159.69.123.112:22-20.161.92.111:35142.service: Deactivated successfully. Jan 28 00:02:21.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-159.69.123.112:22-20.161.92.111:35142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:02:21.616248 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:02:21.616538 systemd[1]: session-8.scope: Consumed 7.285s CPU time, 220.6M memory peak. Jan 28 00:02:21.616761 kernel: audit: type=1131 audit(1769558541.610:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-159.69.123.112:22-20.161.92.111:35142 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:02:21.617785 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:02:21.619806 systemd-logind[1561]: Removed session 8. 
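Annotation: in the pod_startup_latency_tracker records above, podStartSLOduration appears to be the end-to-end startup duration with the image-pull window excluded. Using only the figures printed in the tigera-operator record, 9.22831896 s minus (00:02:14.938235007 - 00:02:06.651149612) reproduces the logged 0.941233565 s exactly, and for kube-proxy-hbpfm, which reports no pull window, SLO and E2E coincide. A quick cross-check with the values copied from the log:

    # Cross-check the tigera-operator startup figures from the log record above.
    from datetime import datetime

    def ts(s: str) -> datetime:
        # log timestamps carry nanoseconds; trim to microseconds for strptime
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

    pull = (ts("2026-01-28 00:02:14.938235007")     # lastFinishedPulling
            - ts("2026-01-28 00:02:06.651149612")   # firstStartedPulling
            ).total_seconds()
    e2e = 9.22831896                                # podStartE2EDuration from the log
    print(f"image pull ~ {pull:.3f}s, e2e - pull ~ {e2e - pull:.3f}s")
    # image pull ~ 8.287s, e2e - pull ~ 0.941s  (matches podStartSLOduration=0.941233565)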
Jan 28 00:02:23.407438 kernel: audit: type=1325 audit(1769558543.401:518): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:23.407694 kernel: audit: type=1300 audit(1769558543.401:518): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffaeff380 a2=0 a3=1 items=0 ppid=2849 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:23.401000 audit[3146]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:23.401000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffaeff380 a2=0 a3=1 items=0 ppid=2849 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:23.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:23.410528 kernel: audit: type=1327 audit(1769558543.401:518): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:23.407000 audit[3146]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:23.413010 kernel: audit: type=1325 audit(1769558543.407:519): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:23.407000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffaeff380 a2=0 a3=1 items=0 ppid=2849 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:23.407000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:23.418612 kernel: audit: type=1300 audit(1769558543.407:519): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffaeff380 a2=0 a3=1 items=0 ppid=2849 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:23.427000 audit[3148]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:23.427000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc06387a0 a2=0 a3=1 items=0 ppid=2849 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:23.427000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:23.433000 audit[3148]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 28 00:02:23.433000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc06387a0 a2=0 a3=1 items=0 ppid=2849 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:23.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.631037 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 28 00:02:29.631325 kernel: audit: type=1325 audit(1769558549.628:522): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.628000 audit[3151]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.628000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd70e9e70 a2=0 a3=1 items=0 ppid=2849 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.634856 kernel: audit: type=1300 audit(1769558549.628:522): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd70e9e70 a2=0 a3=1 items=0 ppid=2849 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.636352 kernel: audit: type=1327 audit(1769558549.628:522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.638000 audit[3151]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.641651 kernel: audit: type=1325 audit(1769558549.638:523): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.638000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd70e9e70 a2=0 a3=1 items=0 ppid=2849 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.638000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.648676 kernel: audit: type=1300 audit(1769558549.638:523): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd70e9e70 a2=0 a3=1 items=0 ppid=2849 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.650684 kernel: audit: type=1327 audit(1769558549.638:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.659000 audit[3153]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.666498 kernel: audit: type=1325 audit(1769558549.659:524): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.666648 kernel: audit: type=1300 audit(1769558549.659:524): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc9e10b20 a2=0 a3=1 items=0 ppid=2849 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.659000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc9e10b20 a2=0 a3=1 items=0 ppid=2849 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.668663 kernel: audit: type=1327 audit(1769558549.659:524): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:29.669000 audit[3153]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.669000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc9e10b20 a2=0 a3=1 items=0 ppid=2849 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:29.671654 kernel: audit: type=1325 audit(1769558549.669:525): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:29.669000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:30.690000 audit[3156]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:30.690000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff6821410 a2=0 a3=1 items=0 ppid=2849 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:30.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:30.698000 audit[3156]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:30.698000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff6821410 a2=0 a3=1 items=0 ppid=2849 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:30.698000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:33.351000 audit[3158]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:33.351000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe71f8560 a2=0 a3=1 items=0 ppid=2849 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.351000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:33.359000 audit[3158]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:33.359000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe71f8560 a2=0 a3=1 items=0 ppid=2849 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.359000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:33.420847 systemd[1]: Created slice kubepods-besteffort-pod90c8beb3_a90d_4d62_b3cf_182dc1ac9bb2.slice - libcontainer container kubepods-besteffort-pod90c8beb3_a90d_4d62_b3cf_182dc1ac9bb2.slice. Jan 28 00:02:33.458000 audit[3160]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:33.458000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff0e2df40 a2=0 a3=1 items=0 ppid=2849 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:33.461000 audit[3160]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3160 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:33.461000 audit[3160]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff0e2df40 a2=0 a3=1 items=0 ppid=2849 pid=3160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.461000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:33.515875 kubelet[2747]: I0128 00:02:33.515771 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2-tigera-ca-bundle\") pod \"calico-typha-65f558cf8d-vx4dv\" (UID: \"90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2\") " pod="calico-system/calico-typha-65f558cf8d-vx4dv" Jan 28 00:02:33.515875 kubelet[2747]: I0128 00:02:33.515842 2747 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2-typha-certs\") pod \"calico-typha-65f558cf8d-vx4dv\" (UID: \"90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2\") " pod="calico-system/calico-typha-65f558cf8d-vx4dv" Jan 28 00:02:33.515875 kubelet[2747]: I0128 00:02:33.515881 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wqc\" (UniqueName: \"kubernetes.io/projected/90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2-kube-api-access-h9wqc\") pod \"calico-typha-65f558cf8d-vx4dv\" (UID: \"90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2\") " pod="calico-system/calico-typha-65f558cf8d-vx4dv" Jan 28 00:02:33.572550 systemd[1]: Created slice kubepods-besteffort-pod32b87076_c1f1_4030_8a73_b8fef7380399.slice - libcontainer container kubepods-besteffort-pod32b87076_c1f1_4030_8a73_b8fef7380399.slice. Jan 28 00:02:33.689925 kubelet[2747]: E0128 00:02:33.689442 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:33.691239 kubelet[2747]: I0128 00:02:33.691140 2747 status_manager.go:890] "Failed to get status for pod" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" pod="calico-system/csi-node-driver-w6mgn" err="pods \"csi-node-driver-w6mgn\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" Jan 28 00:02:33.716763 kubelet[2747]: I0128 00:02:33.716716 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-cni-log-dir\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.717696 kubelet[2747]: I0128 00:02:33.717573 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-var-run-calico\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.717919 kubelet[2747]: I0128 00:02:33.717675 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32b87076-c1f1-4030-8a73-b8fef7380399-node-certs\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.717919 kubelet[2747]: I0128 00:02:33.717883 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-cni-bin-dir\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718224 kubelet[2747]: I0128 00:02:33.718081 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-lib-modules\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718224 kubelet[2747]: I0128 00:02:33.718124 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-cni-net-dir\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718224 kubelet[2747]: I0128 00:02:33.718163 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-flexvol-driver-host\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718224 kubelet[2747]: I0128 00:02:33.718185 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32b87076-c1f1-4030-8a73-b8fef7380399-tigera-ca-bundle\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718224 kubelet[2747]: I0128 00:02:33.718203 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-var-lib-calico\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718466 kubelet[2747]: I0128 00:02:33.718319 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-policysync\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718466 kubelet[2747]: I0128 00:02:33.718411 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2qjd\" (UniqueName: \"kubernetes.io/projected/32b87076-c1f1-4030-8a73-b8fef7380399-kube-api-access-f2qjd\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.718536 kubelet[2747]: I0128 00:02:33.718461 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32b87076-c1f1-4030-8a73-b8fef7380399-xtables-lock\") pod \"calico-node-4lx2l\" (UID: \"32b87076-c1f1-4030-8a73-b8fef7380399\") " pod="calico-system/calico-node-4lx2l" Jan 28 00:02:33.729485 containerd[1582]: time="2026-01-28T00:02:33.729409319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65f558cf8d-vx4dv,Uid:90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:33.774919 containerd[1582]: time="2026-01-28T00:02:33.774839169Z" level=info msg="connecting to shim b57eef55ed81d48975322e07fd481283b9598ef3c2d30d2e7f0abff5462fdb45" address="unix:///run/containerd/s/a40847aefabfe2be05e198cae4200b1ad334f57a3776dfda60d2129b73ef0fb0" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:33.818938 kubelet[2747]: I0128 00:02:33.818833 2747 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2955546a-cb98-4307-9f9a-44877b3e7017-socket-dir\") pod \"csi-node-driver-w6mgn\" (UID: \"2955546a-cb98-4307-9f9a-44877b3e7017\") " pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:33.819214 kubelet[2747]: I0128 00:02:33.819096 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2955546a-cb98-4307-9f9a-44877b3e7017-registration-dir\") pod \"csi-node-driver-w6mgn\" (UID: \"2955546a-cb98-4307-9f9a-44877b3e7017\") " pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:33.819214 kubelet[2747]: I0128 00:02:33.819178 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ncft\" (UniqueName: \"kubernetes.io/projected/2955546a-cb98-4307-9f9a-44877b3e7017-kube-api-access-5ncft\") pod \"csi-node-driver-w6mgn\" (UID: \"2955546a-cb98-4307-9f9a-44877b3e7017\") " pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:33.819498 kubelet[2747]: I0128 00:02:33.819478 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2955546a-cb98-4307-9f9a-44877b3e7017-kubelet-dir\") pod \"csi-node-driver-w6mgn\" (UID: \"2955546a-cb98-4307-9f9a-44877b3e7017\") " pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:33.819827 kubelet[2747]: I0128 00:02:33.819743 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2955546a-cb98-4307-9f9a-44877b3e7017-varrun\") pod \"csi-node-driver-w6mgn\" (UID: \"2955546a-cb98-4307-9f9a-44877b3e7017\") " pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:33.820933 systemd[1]: Started cri-containerd-b57eef55ed81d48975322e07fd481283b9598ef3c2d30d2e7f0abff5462fdb45.scope - libcontainer container b57eef55ed81d48975322e07fd481283b9598ef3c2d30d2e7f0abff5462fdb45. Jan 28 00:02:33.824466 kubelet[2747]: E0128 00:02:33.824218 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.824735 kubelet[2747]: W0128 00:02:33.824712 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.824881 kubelet[2747]: E0128 00:02:33.824822 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.826015 kubelet[2747]: E0128 00:02:33.825967 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.826015 kubelet[2747]: W0128 00:02:33.825985 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.826285 kubelet[2747]: E0128 00:02:33.826109 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.826651 kubelet[2747]: E0128 00:02:33.826556 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.826753 kubelet[2747]: W0128 00:02:33.826587 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.826902 kubelet[2747]: E0128 00:02:33.826811 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.828223 kubelet[2747]: E0128 00:02:33.828199 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.828223 kubelet[2747]: W0128 00:02:33.828253 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.828223 kubelet[2747]: E0128 00:02:33.828274 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.835848 kubelet[2747]: E0128 00:02:33.835809 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.835848 kubelet[2747]: W0128 00:02:33.835840 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.835848 kubelet[2747]: E0128 00:02:33.835868 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.847903 kubelet[2747]: E0128 00:02:33.847866 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.847903 kubelet[2747]: W0128 00:02:33.847891 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.847903 kubelet[2747]: E0128 00:02:33.847916 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.857000 audit: BPF prog-id=151 op=LOAD Jan 28 00:02:33.858000 audit: BPF prog-id=152 op=LOAD Jan 28 00:02:33.858000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.858000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.860000 audit: BPF prog-id=152 op=UNLOAD Jan 28 00:02:33.860000 audit[3183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.860000 audit: BPF prog-id=153 op=LOAD Jan 28 00:02:33.860000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.860000 audit: BPF prog-id=154 op=LOAD Jan 28 00:02:33.860000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.861000 audit: BPF prog-id=154 op=UNLOAD Jan 28 00:02:33.861000 audit[3183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.861000 audit: BPF prog-id=153 op=UNLOAD Jan 
28 00:02:33.861000 audit[3183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.861000 audit: BPF prog-id=155 op=LOAD Jan 28 00:02:33.861000 audit[3183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3172 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6235376565663535656438316434383937353332326530376664343831 Jan 28 00:02:33.879900 containerd[1582]: time="2026-01-28T00:02:33.879856949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4lx2l,Uid:32b87076-c1f1-4030-8a73-b8fef7380399,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:33.907250 containerd[1582]: time="2026-01-28T00:02:33.907187841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65f558cf8d-vx4dv,Uid:90c8beb3-a90d-4d62-b3cf-182dc1ac9bb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"b57eef55ed81d48975322e07fd481283b9598ef3c2d30d2e7f0abff5462fdb45\"" Jan 28 00:02:33.910579 containerd[1582]: time="2026-01-28T00:02:33.910260818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 00:02:33.920510 kubelet[2747]: E0128 00:02:33.920473 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.920510 kubelet[2747]: W0128 00:02:33.920498 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.920510 kubelet[2747]: E0128 00:02:33.920538 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.920916 kubelet[2747]: E0128 00:02:33.920743 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.920916 kubelet[2747]: W0128 00:02:33.920752 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.920916 kubelet[2747]: E0128 00:02:33.920769 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.921015 kubelet[2747]: E0128 00:02:33.920926 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.921015 kubelet[2747]: W0128 00:02:33.920934 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.921015 kubelet[2747]: E0128 00:02:33.920942 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.921111 kubelet[2747]: E0128 00:02:33.921061 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.921111 kubelet[2747]: W0128 00:02:33.921067 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.921111 kubelet[2747]: E0128 00:02:33.921075 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.921683 kubelet[2747]: E0128 00:02:33.921207 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.921683 kubelet[2747]: W0128 00:02:33.921221 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.921683 kubelet[2747]: E0128 00:02:33.921229 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.921683 kubelet[2747]: E0128 00:02:33.921364 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.921683 kubelet[2747]: W0128 00:02:33.921371 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.921683 kubelet[2747]: E0128 00:02:33.921379 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.921683 kubelet[2747]: E0128 00:02:33.921684 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.922494 kubelet[2747]: W0128 00:02:33.921697 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.922494 kubelet[2747]: E0128 00:02:33.921717 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.922494 kubelet[2747]: E0128 00:02:33.921943 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.922494 kubelet[2747]: W0128 00:02:33.921955 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.922494 kubelet[2747]: E0128 00:02:33.921971 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.922494 kubelet[2747]: E0128 00:02:33.922196 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.922494 kubelet[2747]: W0128 00:02:33.922230 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.922494 kubelet[2747]: E0128 00:02:33.922260 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.923301 kubelet[2747]: E0128 00:02:33.923139 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.923301 kubelet[2747]: W0128 00:02:33.923191 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.923301 kubelet[2747]: E0128 00:02:33.923298 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.923834 kubelet[2747]: E0128 00:02:33.923808 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.923923 kubelet[2747]: W0128 00:02:33.923907 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.924115 kubelet[2747]: E0128 00:02:33.924088 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.924802 kubelet[2747]: E0128 00:02:33.924748 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.925005 kubelet[2747]: W0128 00:02:33.924782 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.925061 kubelet[2747]: E0128 00:02:33.925027 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.925499 kubelet[2747]: E0128 00:02:33.925459 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.925723 kubelet[2747]: W0128 00:02:33.925478 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.926005 kubelet[2747]: E0128 00:02:33.925973 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.927711 kubelet[2747]: E0128 00:02:33.927664 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.928030 kubelet[2747]: W0128 00:02:33.927822 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.928030 kubelet[2747]: E0128 00:02:33.927940 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.928445 kubelet[2747]: E0128 00:02:33.928402 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.928445 kubelet[2747]: W0128 00:02:33.928422 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.928663 kubelet[2747]: E0128 00:02:33.928636 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.928998 kubelet[2747]: E0128 00:02:33.928947 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.929169 kubelet[2747]: W0128 00:02:33.929071 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.929239 kubelet[2747]: E0128 00:02:33.929220 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.930223 kubelet[2747]: E0128 00:02:33.930172 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.930223 kubelet[2747]: W0128 00:02:33.930195 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.930712 kubelet[2747]: E0128 00:02:33.930565 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.930859 kubelet[2747]: E0128 00:02:33.930846 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.930933 kubelet[2747]: W0128 00:02:33.930911 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.931149 kubelet[2747]: E0128 00:02:33.931134 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.931714 kubelet[2747]: E0128 00:02:33.931695 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.931975 kubelet[2747]: W0128 00:02:33.931804 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.932475 kubelet[2747]: E0128 00:02:33.932134 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.932827 kubelet[2747]: E0128 00:02:33.932812 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.932910 kubelet[2747]: W0128 00:02:33.932899 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.933008 kubelet[2747]: E0128 00:02:33.932984 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.933290 kubelet[2747]: E0128 00:02:33.933276 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.933404 kubelet[2747]: W0128 00:02:33.933341 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.933643 kubelet[2747]: E0128 00:02:33.933503 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.933844 kubelet[2747]: E0128 00:02:33.933811 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.933844 kubelet[2747]: W0128 00:02:33.933827 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.934019 kubelet[2747]: E0128 00:02:33.933986 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:33.934247 kubelet[2747]: E0128 00:02:33.934218 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.934247 kubelet[2747]: W0128 00:02:33.934233 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.934386 kubelet[2747]: E0128 00:02:33.934332 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.934665 kubelet[2747]: E0128 00:02:33.934650 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.935240 kubelet[2747]: W0128 00:02:33.934745 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.935240 kubelet[2747]: E0128 00:02:33.934774 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.935702 kubelet[2747]: E0128 00:02:33.935634 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.935702 kubelet[2747]: W0128 00:02:33.935651 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.935702 kubelet[2747]: E0128 00:02:33.935664 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.937473 containerd[1582]: time="2026-01-28T00:02:33.937392415Z" level=info msg="connecting to shim dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085" address="unix:///run/containerd/s/b8945374d275a1b84d6f7608098699d1ce08532745ce208446fbf94eee02bcfc" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:33.955985 kubelet[2747]: E0128 00:02:33.953094 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:33.955985 kubelet[2747]: W0128 00:02:33.953279 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:33.955985 kubelet[2747]: E0128 00:02:33.953318 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:33.981037 systemd[1]: Started cri-containerd-dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085.scope - libcontainer container dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085. 
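The repeated driver-call failures above come from the kubelet probing the FlexVolume plugin directory and invoking /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init verb. The executable is not present yet (it is typically installed later by calico-node's flexvol-driver init container, whose flexvol-driver-host host-path volume is mounted above), so the kubelet receives empty output and fails to unmarshal it as JSON. Below is a minimal sketch of the handshake the kubelet expects from a FlexVolume driver binary; it is a hypothetical stand-alone stub for illustration, not Calico's actual uds driver.

```go
// Hypothetical FlexVolume driver stub. The kubelet runs "<driver> init" and
// expects a JSON status object on stdout; the empty output seen in the log
// ("unexpected end of JSON input") is what it gets when the binary is missing.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that this driver does not implement attach/detach.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Verbs this stub does not handle are reported as "Not supported".
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```

With a binary like this in place under nodeagent~uds/, the periodic plugin probe would log a successful init instead of the unmarshal errors repeated throughout this section.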
Jan 28 00:02:33.991000 audit: BPF prog-id=156 op=LOAD Jan 28 00:02:33.992000 audit: BPF prog-id=157 op=LOAD Jan 28 00:02:33.992000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:33.993000 audit: BPF prog-id=157 op=UNLOAD Jan 28 00:02:33.993000 audit[3265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:33.993000 audit: BPF prog-id=158 op=LOAD Jan 28 00:02:33.993000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:33.994000 audit: BPF prog-id=159 op=LOAD Jan 28 00:02:33.994000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:33.994000 audit: BPF prog-id=159 op=UNLOAD Jan 28 00:02:33.994000 audit[3265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:33.994000 audit: BPF prog-id=158 op=UNLOAD Jan 28 00:02:33.994000 audit[3265]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.994000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:33.995000 audit: BPF prog-id=160 op=LOAD Jan 28 00:02:33.995000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3245 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:33.995000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463663962343665373761333330313835383531636138626465656164 Jan 28 00:02:34.019177 containerd[1582]: time="2026-01-28T00:02:34.019123557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4lx2l,Uid:32b87076-c1f1-4030-8a73-b8fef7380399,Namespace:calico-system,Attempt:0,} returns sandbox id \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\"" Jan 28 00:02:34.487000 audit[3291]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:34.487000 audit[3291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffffcf94f00 a2=0 a3=1 items=0 ppid=2849 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:34.487000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:34.496000 audit[3291]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:34.496000 audit[3291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffcf94f00 a2=0 a3=1 items=0 ppid=2849 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:34.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:35.107015 kubelet[2747]: E0128 00:02:35.106943 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:35.458517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4239159384.mount: Deactivated successfully. 
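The proctitle= fields in the audit records above are the audited process's argv, hex-encoded with NUL bytes separating the arguments. A short illustrative decoder (not part of any tool shown in the log) applied to the iptables-restore value recorded just above:

```go
// Decode an auditd "proctitle" field: hex string, arguments separated by NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Value copied from the NETFILTER_CFG audit records above.
	argv, err := decodeProctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(strings.Join(argv, " "))
	// Prints: iptables-restore -w 5 -W 100000 --noflush --counters
}
```

The longer runc proctitles earlier in the section decode the same way, to runc invocations rooted at /run/containerd/runc/k8s.io for the sandbox containers being started.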
Jan 28 00:02:36.466772 containerd[1582]: time="2026-01-28T00:02:36.466676703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:36.468758 containerd[1582]: time="2026-01-28T00:02:36.468644392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 28 00:02:36.470230 containerd[1582]: time="2026-01-28T00:02:36.470155531Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:36.474770 containerd[1582]: time="2026-01-28T00:02:36.474616143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:36.475945 containerd[1582]: time="2026-01-28T00:02:36.475732376Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.565420595s" Jan 28 00:02:36.475945 containerd[1582]: time="2026-01-28T00:02:36.475782419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 28 00:02:36.480109 containerd[1582]: time="2026-01-28T00:02:36.480061739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 00:02:36.501783 containerd[1582]: time="2026-01-28T00:02:36.500986107Z" level=info msg="CreateContainer within sandbox \"b57eef55ed81d48975322e07fd481283b9598ef3c2d30d2e7f0abff5462fdb45\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 00:02:36.515756 containerd[1582]: time="2026-01-28T00:02:36.515698589Z" level=info msg="Container 51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:36.527801 containerd[1582]: time="2026-01-28T00:02:36.527702454Z" level=info msg="CreateContainer within sandbox \"b57eef55ed81d48975322e07fd481283b9598ef3c2d30d2e7f0abff5462fdb45\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2\"" Jan 28 00:02:36.528933 containerd[1582]: time="2026-01-28T00:02:36.528638515Z" level=info msg="StartContainer for \"51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2\"" Jan 28 00:02:36.532074 containerd[1582]: time="2026-01-28T00:02:36.532035617Z" level=info msg="connecting to shim 51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2" address="unix:///run/containerd/s/a40847aefabfe2be05e198cae4200b1ad334f57a3776dfda60d2129b73ef0fb0" protocol=ttrpc version=3 Jan 28 00:02:36.566959 systemd[1]: Started cri-containerd-51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2.scope - libcontainer container 51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2. 
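The image pull, CreateContainer, and StartContainer events above all take place in containerd's k8s.io namespace, driven by the kubelet over CRI. A minimal sketch of inspecting that namespace directly, assuming the v1 github.com/containerd/containerd Go client module, the default socket path /run/containerd/containerd.sock, and root access on the node:

```go
// List images in containerd's k8s.io namespace, the namespace used by the
// kubelet/CRI in the log above (e.g. ghcr.io/flatcar/calico/typha:v3.30.4).
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	images, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.Name())
	}
}
```

The shim addresses logged by containerd (unix:///run/containerd/s/...) are per-sandbox ttrpc endpoints; the sketch above talks to the main containerd API socket instead, which is the usual entry point for ad-hoc inspection.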
Jan 28 00:02:36.585107 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 28 00:02:36.585217 kernel: audit: type=1334 audit(1769558556.582:550): prog-id=161 op=LOAD Jan 28 00:02:36.582000 audit: BPF prog-id=161 op=LOAD Jan 28 00:02:36.589190 kernel: audit: type=1334 audit(1769558556.584:551): prog-id=162 op=LOAD Jan 28 00:02:36.589321 kernel: audit: type=1300 audit(1769558556.584:551): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.584000 audit: BPF prog-id=162 op=LOAD Jan 28 00:02:36.584000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.592046 kernel: audit: type=1327 audit(1769558556.584:551): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.584000 audit: BPF prog-id=162 op=UNLOAD Jan 28 00:02:36.592937 kernel: audit: type=1334 audit(1769558556.584:552): prog-id=162 op=UNLOAD Jan 28 00:02:36.584000 audit[3302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.599498 kernel: audit: type=1300 audit(1769558556.584:552): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.599710 kernel: audit: type=1327 audit(1769558556.584:552): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.584000 audit: BPF prog-id=163 op=LOAD Jan 28 00:02:36.601478 kernel: audit: type=1334 audit(1769558556.584:553): prog-id=163 op=LOAD Jan 28 00:02:36.604030 kernel: audit: type=1300 audit(1769558556.584:553): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.584000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.607833 kernel: audit: type=1327 audit(1769558556.584:553): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.585000 audit: BPF prog-id=164 op=LOAD Jan 28 00:02:36.585000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.588000 audit: BPF prog-id=164 op=UNLOAD Jan 28 00:02:36.588000 audit[3302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.588000 audit: BPF prog-id=163 op=UNLOAD Jan 28 00:02:36.588000 audit[3302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.588000 audit: BPF prog-id=165 op=LOAD Jan 28 00:02:36.588000 audit[3302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3172 pid=3302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:36.588000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3531646537353830303835643039353665353234353437653161306261 Jan 28 00:02:36.642322 containerd[1582]: time="2026-01-28T00:02:36.642259265Z" level=info msg="StartContainer for \"51de7580085d0956e524547e1a0ba72769c84025569c9c3467411c96276effe2\" returns successfully" Jan 28 00:02:37.106151 kubelet[2747]: E0128 00:02:37.106061 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:37.337835 kubelet[2747]: I0128 00:02:37.337634 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65f558cf8d-vx4dv" podStartSLOduration=1.769670293 podStartE2EDuration="4.337381162s" podCreationTimestamp="2026-01-28 00:02:33 +0000 UTC" firstStartedPulling="2026-01-28 00:02:33.909226265 +0000 UTC m=+33.936109184" lastFinishedPulling="2026-01-28 00:02:36.476937134 +0000 UTC m=+36.503820053" observedRunningTime="2026-01-28 00:02:37.308426953 +0000 UTC m=+37.335309872" watchObservedRunningTime="2026-01-28 00:02:37.337381162 +0000 UTC m=+37.364264081" Jan 28 00:02:37.343468 kubelet[2747]: E0128 00:02:37.343252 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.343468 kubelet[2747]: W0128 00:02:37.343285 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.344261 kubelet[2747]: E0128 00:02:37.343804 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.345075 kubelet[2747]: E0128 00:02:37.345046 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.345543 kubelet[2747]: W0128 00:02:37.345235 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.345543 kubelet[2747]: E0128 00:02:37.345405 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.347056 kubelet[2747]: E0128 00:02:37.346909 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.347056 kubelet[2747]: W0128 00:02:37.346933 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.347056 kubelet[2747]: E0128 00:02:37.346955 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:37.347986 kubelet[2747]: E0128 00:02:37.347954 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.347986 kubelet[2747]: W0128 00:02:37.347977 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.348078 kubelet[2747]: E0128 00:02:37.348012 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.348295 kubelet[2747]: E0128 00:02:37.348268 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.348295 kubelet[2747]: W0128 00:02:37.348284 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.348295 kubelet[2747]: E0128 00:02:37.348296 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.348515 kubelet[2747]: E0128 00:02:37.348467 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.348515 kubelet[2747]: W0128 00:02:37.348508 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.348569 kubelet[2747]: E0128 00:02:37.348519 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.349801 kubelet[2747]: E0128 00:02:37.349740 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.349801 kubelet[2747]: W0128 00:02:37.349791 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.349934 kubelet[2747]: E0128 00:02:37.349815 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.350139 kubelet[2747]: E0128 00:02:37.350109 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.350139 kubelet[2747]: W0128 00:02:37.350127 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.350139 kubelet[2747]: E0128 00:02:37.350139 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:37.350421 kubelet[2747]: E0128 00:02:37.350394 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.350465 kubelet[2747]: W0128 00:02:37.350424 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.350465 kubelet[2747]: E0128 00:02:37.350438 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.350672 kubelet[2747]: E0128 00:02:37.350646 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.350672 kubelet[2747]: W0128 00:02:37.350662 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.350772 kubelet[2747]: E0128 00:02:37.350678 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.350969 kubelet[2747]: E0128 00:02:37.350949 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.350969 kubelet[2747]: W0128 00:02:37.350965 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.351050 kubelet[2747]: E0128 00:02:37.350976 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.351139 kubelet[2747]: E0128 00:02:37.351119 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.351174 kubelet[2747]: W0128 00:02:37.351149 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.351174 kubelet[2747]: E0128 00:02:37.351160 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 00:02:37.351321 kubelet[2747]: E0128 00:02:37.351304 2747 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 00:02:37.351321 kubelet[2747]: W0128 00:02:37.351319 2747 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 00:02:37.351387 kubelet[2747]: E0128 00:02:37.351327 2747 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 00:02:37.388000 audit[3385]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:37.388000 audit[3385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd9ebb750 a2=0 a3=1 items=0 ppid=2849 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:37.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:37.391000 audit[3385]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3385 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:37.391000 audit[3385]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffd9ebb750 a2=0 a3=1 items=0 ppid=2849 pid=3385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:37.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:38.040631 containerd[1582]: time="2026-01-28T00:02:38.040478958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:38.042577 containerd[1582]: time="2026-01-28T00:02:38.042495684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:38.044751 containerd[1582]: time="2026-01-28T00:02:38.044612136Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:38.048213 containerd[1582]: time="2026-01-28T00:02:38.047928303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:38.048878 containerd[1582]: time="2026-01-28T00:02:38.048811798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.568513404s" Jan 28 00:02:38.048948 containerd[1582]: time="2026-01-28T00:02:38.048910245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 28 00:02:38.055680 containerd[1582]: time="2026-01-28T00:02:38.055243760Z" level=info msg="CreateContainer within sandbox \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 00:02:38.071726 containerd[1582]: time="2026-01-28T00:02:38.071660904Z" level=info msg="Container 9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f: CDI 
devices from CRI Config.CDIDevices: []" Jan 28 00:02:38.082891 containerd[1582]: time="2026-01-28T00:02:38.082804960Z" level=info msg="CreateContainer within sandbox \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f\"" Jan 28 00:02:38.084126 containerd[1582]: time="2026-01-28T00:02:38.084054678Z" level=info msg="StartContainer for \"9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f\"" Jan 28 00:02:38.089779 containerd[1582]: time="2026-01-28T00:02:38.089582703Z" level=info msg="connecting to shim 9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f" address="unix:///run/containerd/s/b8945374d275a1b84d6f7608098699d1ce08532745ce208446fbf94eee02bcfc" protocol=ttrpc version=3 Jan 28 00:02:38.123941 systemd[1]: Started cri-containerd-9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f.scope - libcontainer container 9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f. Jan 28 00:02:38.175000 audit: BPF prog-id=166 op=LOAD Jan 28 00:02:38.175000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3245 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:38.175000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966336662646437386361623234633633316631633664343336613032 Jan 28 00:02:38.176000 audit: BPF prog-id=167 op=LOAD Jan 28 00:02:38.176000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3245 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:38.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966336662646437386361623234633633316631633664343336613032 Jan 28 00:02:38.176000 audit: BPF prog-id=167 op=UNLOAD Jan 28 00:02:38.176000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:38.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966336662646437386361623234633633316631633664343336613032 Jan 28 00:02:38.176000 audit: BPF prog-id=166 op=UNLOAD Jan 28 00:02:38.176000 audit[3390]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:38.176000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966336662646437386361623234633633316631633664343336613032 Jan 28 00:02:38.176000 audit: BPF prog-id=168 op=LOAD Jan 28 00:02:38.176000 audit[3390]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3245 pid=3390 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:38.176000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966336662646437386361623234633633316631633664343336613032 Jan 28 00:02:38.211049 containerd[1582]: time="2026-01-28T00:02:38.210959397Z" level=info msg="StartContainer for \"9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f\" returns successfully" Jan 28 00:02:38.229219 systemd[1]: cri-containerd-9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f.scope: Deactivated successfully. Jan 28 00:02:38.231000 audit: BPF prog-id=168 op=UNLOAD Jan 28 00:02:38.235521 containerd[1582]: time="2026-01-28T00:02:38.235449846Z" level=info msg="received container exit event container_id:\"9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f\" id:\"9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f\" pid:3403 exited_at:{seconds:1769558558 nanos:235008178}" Jan 28 00:02:38.264113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f3fbdd78cab24c631f1c6d436a02c76b1b404f3efb67fd596aa93d60ad1f30f-rootfs.mount: Deactivated successfully. 
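Note on the burst of FlexVolume errors at 00:02:37 above (driver-call.go:262 / driver-call.go:149 / plugins.go:695): the kubelet probes each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, runs the driver binary with the argument "init", and expects a JSON status object on stdout. The nodeagent~uds/uds binary is what Calico's pod2daemon-flexvol container installs, and that container only starts in the records that follow, so at this point the executable is missing, the call produces empty output, and Go's encoding/json reports "unexpected end of JSON input". A minimal sketch of that failure mode follows; it is not kubelet source, and the driverStatus shape is illustrative, modeled on the FlexVolume {status, capabilities} reply.

// Minimal sketch (not kubelet code): why an absent FlexVolume driver yields
// "executable file not found in $PATH" followed by "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the general shape of a FlexVolume "init" reply,
// e.g. {"status":"Success","capabilities":{"attach":false}}; the field
// names here are illustrative, not copied from kubelet.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func probe(driver string) error {
	// Fails with "executable file not found in $PATH" while the binary is absent.
	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// With empty output, encoding/json returns "unexpected end of JSON input".
		return fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %v", out, uerr)
	}
	return nil
}

func main() {
	fmt.Println(probe("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}

For reference, the audit PROCTITLE fields in the surrounding records are hex-encoded, NUL-separated argv: the NETFILTER_CFG one decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the runc ones decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/9f3fbdd78cab24c631f1c6d436a02..." with the container ID cut off by the audit field length limit.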
Jan 28 00:02:39.106986 kubelet[2747]: E0128 00:02:39.106414 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:39.301405 containerd[1582]: time="2026-01-28T00:02:39.300996855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 00:02:41.106468 kubelet[2747]: E0128 00:02:41.106068 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:42.063649 containerd[1582]: time="2026-01-28T00:02:42.063561238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:42.065533 containerd[1582]: time="2026-01-28T00:02:42.065315018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 28 00:02:42.068578 containerd[1582]: time="2026-01-28T00:02:42.068527483Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:42.071974 containerd[1582]: time="2026-01-28T00:02:42.071862955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:42.073282 containerd[1582]: time="2026-01-28T00:02:42.073226873Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.772161614s" Jan 28 00:02:42.073471 containerd[1582]: time="2026-01-28T00:02:42.073444726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 28 00:02:42.078328 containerd[1582]: time="2026-01-28T00:02:42.078274563Z" level=info msg="CreateContainer within sandbox \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 00:02:42.097935 containerd[1582]: time="2026-01-28T00:02:42.097847448Z" level=info msg="Container 698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:42.115394 containerd[1582]: time="2026-01-28T00:02:42.115307772Z" level=info msg="CreateContainer within sandbox \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6\"" Jan 28 00:02:42.116183 containerd[1582]: time="2026-01-28T00:02:42.116147860Z" level=info msg="StartContainer for \"698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6\"" Jan 28 00:02:42.119620 
containerd[1582]: time="2026-01-28T00:02:42.119550656Z" level=info msg="connecting to shim 698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6" address="unix:///run/containerd/s/b8945374d275a1b84d6f7608098699d1ce08532745ce208446fbf94eee02bcfc" protocol=ttrpc version=3 Jan 28 00:02:42.147849 systemd[1]: Started cri-containerd-698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6.scope - libcontainer container 698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6. Jan 28 00:02:42.195000 audit: BPF prog-id=169 op=LOAD Jan 28 00:02:42.197419 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 28 00:02:42.197518 kernel: audit: type=1334 audit(1769558562.195:566): prog-id=169 op=LOAD Jan 28 00:02:42.195000 audit[3449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.201719 kernel: audit: type=1300 audit(1769558562.195:566): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.205736 kernel: audit: type=1327 audit(1769558562.195:566): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.195000 audit: BPF prog-id=170 op=LOAD Jan 28 00:02:42.206993 kernel: audit: type=1334 audit(1769558562.195:567): prog-id=170 op=LOAD Jan 28 00:02:42.210350 kernel: audit: type=1300 audit(1769558562.195:567): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit[3449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.214858 kernel: audit: type=1327 audit(1769558562.195:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.195000 
audit: BPF prog-id=170 op=UNLOAD Jan 28 00:02:42.216855 kernel: audit: type=1334 audit(1769558562.195:568): prog-id=170 op=UNLOAD Jan 28 00:02:42.217619 kernel: audit: type=1300 audit(1769558562.195:568): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit[3449]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.223761 kernel: audit: type=1327 audit(1769558562.195:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.223892 kernel: audit: type=1334 audit(1769558562.195:569): prog-id=169 op=UNLOAD Jan 28 00:02:42.195000 audit: BPF prog-id=169 op=UNLOAD Jan 28 00:02:42.195000 audit[3449]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.195000 audit: BPF prog-id=171 op=LOAD Jan 28 00:02:42.195000 audit[3449]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3245 pid=3449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:42.195000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3639386662636335376161393337613661386239616361666135656131 Jan 28 00:02:42.245615 containerd[1582]: time="2026-01-28T00:02:42.243548422Z" level=info msg="StartContainer for \"698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6\" returns successfully" Jan 28 00:02:42.855859 containerd[1582]: time="2026-01-28T00:02:42.855809132Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:02:42.860610 systemd[1]: cri-containerd-698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6.scope: 
Deactivated successfully. Jan 28 00:02:42.861021 systemd[1]: cri-containerd-698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6.scope: Consumed 578ms CPU time, 188.2M memory peak, 165.9M written to disk. Jan 28 00:02:42.864023 containerd[1582]: time="2026-01-28T00:02:42.863868315Z" level=info msg="received container exit event container_id:\"698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6\" id:\"698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6\" pid:3462 exited_at:{seconds:1769558562 nanos:862559640}" Jan 28 00:02:42.865000 audit: BPF prog-id=171 op=UNLOAD Jan 28 00:02:42.895567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-698fbcc57aa937a6a8b9acafa5ea1716ac60437df91b691a5ea29906830ef9a6-rootfs.mount: Deactivated successfully. Jan 28 00:02:42.959078 kubelet[2747]: I0128 00:02:42.959028 2747 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 28 00:02:43.031354 systemd[1]: Created slice kubepods-burstable-pod075de7ad_4943_4b28_b59b_6c8de5997c15.slice - libcontainer container kubepods-burstable-pod075de7ad_4943_4b28_b59b_6c8de5997c15.slice. Jan 28 00:02:43.059222 systemd[1]: Created slice kubepods-burstable-podc920f575_603c_483a_9607_ef10e4a56793.slice - libcontainer container kubepods-burstable-podc920f575_603c_483a_9607_ef10e4a56793.slice. Jan 28 00:02:43.067641 kubelet[2747]: W0128 00:02:43.066196 2747 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:ci-4593-0-0-n-20383d5ef7" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object Jan 28 00:02:43.067641 kubelet[2747]: E0128 00:02:43.066844 2747 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" logger="UnhandledError" Jan 28 00:02:43.067641 kubelet[2747]: W0128 00:02:43.067067 2747 reflector.go:569] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:ci-4593-0-0-n-20383d5ef7" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object Jan 28 00:02:43.067872 kubelet[2747]: E0128 00:02:43.067578 2747 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" logger="UnhandledError" Jan 28 00:02:43.071658 kubelet[2747]: W0128 00:02:43.069506 2747 reflector.go:569] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: configmaps "goldmane-ca-bundle" is forbidden: User "system:node:ci-4593-0-0-n-20383d5ef7" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object Jan 28 00:02:43.071658 
kubelet[2747]: E0128 00:02:43.069612 2747 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane-ca-bundle\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" logger="UnhandledError" Jan 28 00:02:43.071658 kubelet[2747]: W0128 00:02:43.069690 2747 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4593-0-0-n-20383d5ef7" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object Jan 28 00:02:43.071658 kubelet[2747]: E0128 00:02:43.069706 2747 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" logger="UnhandledError" Jan 28 00:02:43.073785 kubelet[2747]: I0128 00:02:43.073177 2747 status_manager.go:890] "Failed to get status for pod" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" err="pods \"calico-apiserver-697c7bd8db-7vcbz\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" Jan 28 00:02:43.078292 systemd[1]: Created slice kubepods-besteffort-pod2165c40d_a709_4e3d_baf9_8e91747af847.slice - libcontainer container kubepods-besteffort-pod2165c40d_a709_4e3d_baf9_8e91747af847.slice. Jan 28 00:02:43.087557 kubelet[2747]: W0128 00:02:43.086328 2747 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4593-0-0-n-20383d5ef7" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object Jan 28 00:02:43.088540 systemd[1]: Created slice kubepods-besteffort-pod137246fc_f131_4552_a311_34e5752765be.slice - libcontainer container kubepods-besteffort-pod137246fc_f131_4552_a311_34e5752765be.slice. Jan 28 00:02:43.090508 kubelet[2747]: E0128 00:02:43.088915 2747 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4593-0-0-n-20383d5ef7\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object" logger="UnhandledError" Jan 28 00:02:43.102770 systemd[1]: Created slice kubepods-besteffort-pod1d007c43_f99e_43db_8dd1_f5d56f04a788.slice - libcontainer container kubepods-besteffort-pod1d007c43_f99e_43db_8dd1_f5d56f04a788.slice. 
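The reflector warnings just above ("no relationship found between node 'ci-4593-0-0-n-20383d5ef7' and this object") come from the Node authorizer: a kubelet may only read secrets and configmaps referenced by pods already bound to it, and immediately after the calico-apiserver, goldmane and whisker pods are scheduled there is typically a short window before the authorizer's graph reflects the new bindings, so the first list/watch attempts are denied and later retries succeed. The decision can be reproduced from an admin credential with a SubjectAccessReview for the node's user and group; the sketch below assumes an admin kubeconfig at /etc/kubernetes/admin.conf (path illustrative) and reuses the node, namespace and secret names from the log.

// Sketch: ask the API server the same question the Node authorizer answered
// for the kubelet above. Requires k8s.io/client-go; names are taken from the log.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed admin kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User:   "system:node:ci-4593-0-0-n-20383d5ef7",
			Groups: []string{"system:nodes"},
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "calico-system",
				// The kubelet's reflector was denied "list"; the Node authorizer
				// only grants per-object "get" for secrets referenced by its pods.
				Verb:     "get",
				Resource: "secrets",
				Name:     "goldmane-key-pair",
			},
		},
	}
	res, err := cs.AuthorizationV1().SubjectAccessReviews().Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// While no pod referencing the secret is known to be bound to the node,
	// the reason echoes the "no relationship found" message seen above.
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}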
Jan 28 00:02:43.106414 kubelet[2747]: I0128 00:02:43.106243 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-ca-bundle\") pod \"whisker-64ccb5c8b-29vjq\" (UID: \"2165c40d-a709-4e3d-baf9-8e91747af847\") " pod="calico-system/whisker-64ccb5c8b-29vjq" Jan 28 00:02:43.106414 kubelet[2747]: I0128 00:02:43.106318 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchv6\" (UniqueName: \"kubernetes.io/projected/075de7ad-4943-4b28-b59b-6c8de5997c15-kube-api-access-dchv6\") pod \"coredns-668d6bf9bc-hqlk4\" (UID: \"075de7ad-4943-4b28-b59b-6c8de5997c15\") " pod="kube-system/coredns-668d6bf9bc-hqlk4" Jan 28 00:02:43.106414 kubelet[2747]: I0128 00:02:43.106339 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/07cd90a0-de7e-4c03-9b09-e4adb1ab3e71-calico-apiserver-certs\") pod \"calico-apiserver-697c7bd8db-7vcbz\" (UID: \"07cd90a0-de7e-4c03-9b09-e4adb1ab3e71\") " pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" Jan 28 00:02:43.106414 kubelet[2747]: I0128 00:02:43.106359 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bec7677b-1e96-4176-929f-6c7596f75411-goldmane-ca-bundle\") pod \"goldmane-666569f655-h6h5c\" (UID: \"bec7677b-1e96-4176-929f-6c7596f75411\") " pod="calico-system/goldmane-666569f655-h6h5c" Jan 28 00:02:43.106414 kubelet[2747]: I0128 00:02:43.106380 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bec7677b-1e96-4176-929f-6c7596f75411-config\") pod \"goldmane-666569f655-h6h5c\" (UID: \"bec7677b-1e96-4176-929f-6c7596f75411\") " pod="calico-system/goldmane-666569f655-h6h5c" Jan 28 00:02:43.107284 kubelet[2747]: I0128 00:02:43.106399 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/075de7ad-4943-4b28-b59b-6c8de5997c15-config-volume\") pod \"coredns-668d6bf9bc-hqlk4\" (UID: \"075de7ad-4943-4b28-b59b-6c8de5997c15\") " pod="kube-system/coredns-668d6bf9bc-hqlk4" Jan 28 00:02:43.107284 kubelet[2747]: I0128 00:02:43.106416 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-backend-key-pair\") pod \"whisker-64ccb5c8b-29vjq\" (UID: \"2165c40d-a709-4e3d-baf9-8e91747af847\") " pod="calico-system/whisker-64ccb5c8b-29vjq" Jan 28 00:02:43.107284 kubelet[2747]: I0128 00:02:43.106433 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67bs\" (UniqueName: \"kubernetes.io/projected/07cd90a0-de7e-4c03-9b09-e4adb1ab3e71-kube-api-access-l67bs\") pod \"calico-apiserver-697c7bd8db-7vcbz\" (UID: \"07cd90a0-de7e-4c03-9b09-e4adb1ab3e71\") " pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" Jan 28 00:02:43.107284 kubelet[2747]: I0128 00:02:43.106455 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1d007c43-f99e-43db-8dd1-f5d56f04a788-tigera-ca-bundle\") pod \"calico-kube-controllers-74bd894b84-mmqcv\" (UID: \"1d007c43-f99e-43db-8dd1-f5d56f04a788\") " pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" Jan 28 00:02:43.107284 kubelet[2747]: I0128 00:02:43.106474 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pfsv\" (UniqueName: \"kubernetes.io/projected/2165c40d-a709-4e3d-baf9-8e91747af847-kube-api-access-4pfsv\") pod \"whisker-64ccb5c8b-29vjq\" (UID: \"2165c40d-a709-4e3d-baf9-8e91747af847\") " pod="calico-system/whisker-64ccb5c8b-29vjq" Jan 28 00:02:43.107412 kubelet[2747]: I0128 00:02:43.106495 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/137246fc-f131-4552-a311-34e5752765be-calico-apiserver-certs\") pod \"calico-apiserver-697c7bd8db-5lsmb\" (UID: \"137246fc-f131-4552-a311-34e5752765be\") " pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" Jan 28 00:02:43.107412 kubelet[2747]: I0128 00:02:43.106517 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxrkb\" (UniqueName: \"kubernetes.io/projected/1d007c43-f99e-43db-8dd1-f5d56f04a788-kube-api-access-vxrkb\") pod \"calico-kube-controllers-74bd894b84-mmqcv\" (UID: \"1d007c43-f99e-43db-8dd1-f5d56f04a788\") " pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" Jan 28 00:02:43.107412 kubelet[2747]: I0128 00:02:43.106536 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r599k\" (UniqueName: \"kubernetes.io/projected/137246fc-f131-4552-a311-34e5752765be-kube-api-access-r599k\") pod \"calico-apiserver-697c7bd8db-5lsmb\" (UID: \"137246fc-f131-4552-a311-34e5752765be\") " pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" Jan 28 00:02:43.107412 kubelet[2747]: I0128 00:02:43.106554 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwsg\" (UniqueName: \"kubernetes.io/projected/c920f575-603c-483a-9607-ef10e4a56793-kube-api-access-zpwsg\") pod \"coredns-668d6bf9bc-ckpk7\" (UID: \"c920f575-603c-483a-9607-ef10e4a56793\") " pod="kube-system/coredns-668d6bf9bc-ckpk7" Jan 28 00:02:43.107412 kubelet[2747]: I0128 00:02:43.106572 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bec7677b-1e96-4176-929f-6c7596f75411-goldmane-key-pair\") pod \"goldmane-666569f655-h6h5c\" (UID: \"bec7677b-1e96-4176-929f-6c7596f75411\") " pod="calico-system/goldmane-666569f655-h6h5c" Jan 28 00:02:43.109634 kubelet[2747]: I0128 00:02:43.108910 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52vxw\" (UniqueName: \"kubernetes.io/projected/bec7677b-1e96-4176-929f-6c7596f75411-kube-api-access-52vxw\") pod \"goldmane-666569f655-h6h5c\" (UID: \"bec7677b-1e96-4176-929f-6c7596f75411\") " pod="calico-system/goldmane-666569f655-h6h5c" Jan 28 00:02:43.109634 kubelet[2747]: I0128 00:02:43.109087 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c920f575-603c-483a-9607-ef10e4a56793-config-volume\") pod \"coredns-668d6bf9bc-ckpk7\" (UID: 
\"c920f575-603c-483a-9607-ef10e4a56793\") " pod="kube-system/coredns-668d6bf9bc-ckpk7" Jan 28 00:02:43.123417 systemd[1]: Created slice kubepods-besteffort-pod07cd90a0_de7e_4c03_9b09_e4adb1ab3e71.slice - libcontainer container kubepods-besteffort-pod07cd90a0_de7e_4c03_9b09_e4adb1ab3e71.slice. Jan 28 00:02:43.138690 systemd[1]: Created slice kubepods-besteffort-podbec7677b_1e96_4176_929f_6c7596f75411.slice - libcontainer container kubepods-besteffort-podbec7677b_1e96_4176_929f_6c7596f75411.slice. Jan 28 00:02:43.150921 systemd[1]: Created slice kubepods-besteffort-pod2955546a_cb98_4307_9f9a_44877b3e7017.slice - libcontainer container kubepods-besteffort-pod2955546a_cb98_4307_9f9a_44877b3e7017.slice. Jan 28 00:02:43.158556 containerd[1582]: time="2026-01-28T00:02:43.158473363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6mgn,Uid:2955546a-cb98-4307-9f9a-44877b3e7017,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:43.325699 containerd[1582]: time="2026-01-28T00:02:43.324846311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 00:02:43.361530 containerd[1582]: time="2026-01-28T00:02:43.361054674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqlk4,Uid:075de7ad-4943-4b28-b59b-6c8de5997c15,Namespace:kube-system,Attempt:0,}" Jan 28 00:02:43.368194 containerd[1582]: time="2026-01-28T00:02:43.368126713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ckpk7,Uid:c920f575-603c-483a-9607-ef10e4a56793,Namespace:kube-system,Attempt:0,}" Jan 28 00:02:43.377763 containerd[1582]: time="2026-01-28T00:02:43.377683732Z" level=error msg="Failed to destroy network for sandbox \"6e1a018916a86bde237cf18aa3e3c12a4bbdf5439b44a1e8e6971c67b04e173a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.385908 containerd[1582]: time="2026-01-28T00:02:43.385839953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64ccb5c8b-29vjq,Uid:2165c40d-a709-4e3d-baf9-8e91747af847,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:43.391857 containerd[1582]: time="2026-01-28T00:02:43.391786208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6mgn,Uid:2955546a-cb98-4307-9f9a-44877b3e7017,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a018916a86bde237cf18aa3e3c12a4bbdf5439b44a1e8e6971c67b04e173a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.394175 kubelet[2747]: E0128 00:02:43.392640 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a018916a86bde237cf18aa3e3c12a4bbdf5439b44a1e8e6971c67b04e173a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.394175 kubelet[2747]: E0128 00:02:43.392731 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a018916a86bde237cf18aa3e3c12a4bbdf5439b44a1e8e6971c67b04e173a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:43.394175 kubelet[2747]: E0128 00:02:43.392752 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e1a018916a86bde237cf18aa3e3c12a4bbdf5439b44a1e8e6971c67b04e173a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w6mgn" Jan 28 00:02:43.394406 kubelet[2747]: E0128 00:02:43.392794 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e1a018916a86bde237cf18aa3e3c12a4bbdf5439b44a1e8e6971c67b04e173a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:43.418705 containerd[1582]: time="2026-01-28T00:02:43.418646924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74bd894b84-mmqcv,Uid:1d007c43-f99e-43db-8dd1-f5d56f04a788,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:43.528803 containerd[1582]: time="2026-01-28T00:02:43.528744937Z" level=error msg="Failed to destroy network for sandbox \"fe7b4ccb19127f24a7546b0e5a753b6a167fb7e323b0f291b07540f0b986d61a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.531382 containerd[1582]: time="2026-01-28T00:02:43.530569480Z" level=error msg="Failed to destroy network for sandbox \"d1a853aac5ceb1fe41432b1d9fe1ad93c1faf6f5802c1ffb12a3f579f1764ac3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.540662 containerd[1582]: time="2026-01-28T00:02:43.540463558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqlk4,Uid:075de7ad-4943-4b28-b59b-6c8de5997c15,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a853aac5ceb1fe41432b1d9fe1ad93c1faf6f5802c1ffb12a3f579f1764ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.541881 kubelet[2747]: E0128 00:02:43.541055 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a853aac5ceb1fe41432b1d9fe1ad93c1faf6f5802c1ffb12a3f579f1764ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.541881 
kubelet[2747]: E0128 00:02:43.541133 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a853aac5ceb1fe41432b1d9fe1ad93c1faf6f5802c1ffb12a3f579f1764ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqlk4" Jan 28 00:02:43.541881 kubelet[2747]: E0128 00:02:43.541155 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1a853aac5ceb1fe41432b1d9fe1ad93c1faf6f5802c1ffb12a3f579f1764ac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqlk4" Jan 28 00:02:43.543658 kubelet[2747]: E0128 00:02:43.541203 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hqlk4_kube-system(075de7ad-4943-4b28-b59b-6c8de5997c15)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hqlk4_kube-system(075de7ad-4943-4b28-b59b-6c8de5997c15)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1a853aac5ceb1fe41432b1d9fe1ad93c1faf6f5802c1ffb12a3f579f1764ac3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hqlk4" podUID="075de7ad-4943-4b28-b59b-6c8de5997c15" Jan 28 00:02:43.543658 kubelet[2747]: E0128 00:02:43.542662 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7b4ccb19127f24a7546b0e5a753b6a167fb7e323b0f291b07540f0b986d61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.543658 kubelet[2747]: E0128 00:02:43.542757 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7b4ccb19127f24a7546b0e5a753b6a167fb7e323b0f291b07540f0b986d61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ckpk7" Jan 28 00:02:43.543782 containerd[1582]: time="2026-01-28T00:02:43.542394667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ckpk7,Uid:c920f575-603c-483a-9607-ef10e4a56793,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7b4ccb19127f24a7546b0e5a753b6a167fb7e323b0f291b07540f0b986d61a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.543836 kubelet[2747]: E0128 00:02:43.542793 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7b4ccb19127f24a7546b0e5a753b6a167fb7e323b0f291b07540f0b986d61a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-ckpk7" Jan 28 00:02:43.543836 kubelet[2747]: E0128 00:02:43.542832 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ckpk7_kube-system(c920f575-603c-483a-9607-ef10e4a56793)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ckpk7_kube-system(c920f575-603c-483a-9607-ef10e4a56793)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe7b4ccb19127f24a7546b0e5a753b6a167fb7e323b0f291b07540f0b986d61a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-ckpk7" podUID="c920f575-603c-483a-9607-ef10e4a56793" Jan 28 00:02:43.551843 containerd[1582]: time="2026-01-28T00:02:43.551692512Z" level=error msg="Failed to destroy network for sandbox \"82e1ff4e5100248dbc07c799303a7a73cbecb2174af172ebae1fc773de31799f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.557859 containerd[1582]: time="2026-01-28T00:02:43.557761094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64ccb5c8b-29vjq,Uid:2165c40d-a709-4e3d-baf9-8e91747af847,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e1ff4e5100248dbc07c799303a7a73cbecb2174af172ebae1fc773de31799f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.558150 kubelet[2747]: E0128 00:02:43.558065 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e1ff4e5100248dbc07c799303a7a73cbecb2174af172ebae1fc773de31799f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.558195 kubelet[2747]: E0128 00:02:43.558153 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e1ff4e5100248dbc07c799303a7a73cbecb2174af172ebae1fc773de31799f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64ccb5c8b-29vjq" Jan 28 00:02:43.558195 kubelet[2747]: E0128 00:02:43.558179 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82e1ff4e5100248dbc07c799303a7a73cbecb2174af172ebae1fc773de31799f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64ccb5c8b-29vjq" Jan 28 00:02:43.558247 kubelet[2747]: E0128 00:02:43.558218 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-64ccb5c8b-29vjq_calico-system(2165c40d-a709-4e3d-baf9-8e91747af847)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64ccb5c8b-29vjq_calico-system(2165c40d-a709-4e3d-baf9-8e91747af847)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82e1ff4e5100248dbc07c799303a7a73cbecb2174af172ebae1fc773de31799f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64ccb5c8b-29vjq" podUID="2165c40d-a709-4e3d-baf9-8e91747af847" Jan 28 00:02:43.559620 containerd[1582]: time="2026-01-28T00:02:43.559546755Z" level=error msg="Failed to destroy network for sandbox \"24f38a660cf0d0739eb200645baf25cc51d3bda0c606effebdb61ef134be9d83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.563943 containerd[1582]: time="2026-01-28T00:02:43.563844917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74bd894b84-mmqcv,Uid:1d007c43-f99e-43db-8dd1-f5d56f04a788,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f38a660cf0d0739eb200645baf25cc51d3bda0c606effebdb61ef134be9d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.564388 kubelet[2747]: E0128 00:02:43.564346 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f38a660cf0d0739eb200645baf25cc51d3bda0c606effebdb61ef134be9d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:43.564534 kubelet[2747]: E0128 00:02:43.564514 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f38a660cf0d0739eb200645baf25cc51d3bda0c606effebdb61ef134be9d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" Jan 28 00:02:43.564663 kubelet[2747]: E0128 00:02:43.564632 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24f38a660cf0d0739eb200645baf25cc51d3bda0c606effebdb61ef134be9d83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" Jan 28 00:02:43.564828 kubelet[2747]: E0128 00:02:43.564794 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74bd894b84-mmqcv_calico-system(1d007c43-f99e-43db-8dd1-f5d56f04a788)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74bd894b84-mmqcv_calico-system(1d007c43-f99e-43db-8dd1-f5d56f04a788)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"24f38a660cf0d0739eb200645baf25cc51d3bda0c606effebdb61ef134be9d83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:02:44.174894 systemd[1]: run-netns-cni\x2d3547b1c2\x2d6be6\x2d5f5e\x2d434a\x2d52ccd2a01899.mount: Deactivated successfully. Jan 28 00:02:44.212267 kubelet[2747]: E0128 00:02:44.211733 2747 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 28 00:02:44.212267 kubelet[2747]: E0128 00:02:44.211882 2747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/137246fc-f131-4552-a311-34e5752765be-calico-apiserver-certs podName:137246fc-f131-4552-a311-34e5752765be nodeName:}" failed. No retries permitted until 2026-01-28 00:02:44.711851276 +0000 UTC m=+44.738734235 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/137246fc-f131-4552-a311-34e5752765be-calico-apiserver-certs") pod "calico-apiserver-697c7bd8db-5lsmb" (UID: "137246fc-f131-4552-a311-34e5752765be") : failed to sync secret cache: timed out waiting for the condition Jan 28 00:02:44.215387 kubelet[2747]: E0128 00:02:44.214019 2747 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 28 00:02:44.215387 kubelet[2747]: E0128 00:02:44.214278 2747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/07cd90a0-de7e-4c03-9b09-e4adb1ab3e71-calico-apiserver-certs podName:07cd90a0-de7e-4c03-9b09-e4adb1ab3e71 nodeName:}" failed. No retries permitted until 2026-01-28 00:02:44.714238889 +0000 UTC m=+44.741121848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/07cd90a0-de7e-4c03-9b09-e4adb1ab3e71-calico-apiserver-certs") pod "calico-apiserver-697c7bd8db-7vcbz" (UID: "07cd90a0-de7e-4c03-9b09-e4adb1ab3e71") : failed to sync secret cache: timed out waiting for the condition Jan 28 00:02:44.215387 kubelet[2747]: E0128 00:02:44.214823 2747 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jan 28 00:02:44.215387 kubelet[2747]: E0128 00:02:44.215355 2747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bec7677b-1e96-4176-929f-6c7596f75411-config podName:bec7677b-1e96-4176-929f-6c7596f75411 nodeName:}" failed. No retries permitted until 2026-01-28 00:02:44.715325909 +0000 UTC m=+44.742208828 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/bec7677b-1e96-4176-929f-6c7596f75411-config") pod "goldmane-666569f655-h6h5c" (UID: "bec7677b-1e96-4176-929f-6c7596f75411") : failed to sync configmap cache: timed out waiting for the condition Jan 28 00:02:44.224387 kubelet[2747]: E0128 00:02:44.224099 2747 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 28 00:02:44.224387 kubelet[2747]: E0128 00:02:44.224210 2747 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bec7677b-1e96-4176-929f-6c7596f75411-goldmane-key-pair podName:bec7677b-1e96-4176-929f-6c7596f75411 nodeName:}" failed. No retries permitted until 2026-01-28 00:02:44.7241758 +0000 UTC m=+44.751058719 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/bec7677b-1e96-4176-929f-6c7596f75411-goldmane-key-pair") pod "goldmane-666569f655-h6h5c" (UID: "bec7677b-1e96-4176-929f-6c7596f75411") : failed to sync secret cache: timed out waiting for the condition Jan 28 00:02:44.912197 containerd[1582]: time="2026-01-28T00:02:44.912134626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-5lsmb,Uid:137246fc-f131-4552-a311-34e5752765be,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:02:44.938465 containerd[1582]: time="2026-01-28T00:02:44.938083625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-7vcbz,Uid:07cd90a0-de7e-4c03-9b09-e4adb1ab3e71,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:02:44.943816 containerd[1582]: time="2026-01-28T00:02:44.943683295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h6h5c,Uid:bec7677b-1e96-4176-929f-6c7596f75411,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:45.051727 containerd[1582]: time="2026-01-28T00:02:45.051655555Z" level=error msg="Failed to destroy network for sandbox \"59642e8210932a08f44d53c26396940e76941dff79366582e53ac5d2628ae66d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.059146 containerd[1582]: time="2026-01-28T00:02:45.059082840Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-5lsmb,Uid:137246fc-f131-4552-a311-34e5752765be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59642e8210932a08f44d53c26396940e76941dff79366582e53ac5d2628ae66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.062209 kubelet[2747]: E0128 00:02:45.059647 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59642e8210932a08f44d53c26396940e76941dff79366582e53ac5d2628ae66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.062209 kubelet[2747]: E0128 00:02:45.061699 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"59642e8210932a08f44d53c26396940e76941dff79366582e53ac5d2628ae66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" Jan 28 00:02:45.062209 kubelet[2747]: E0128 00:02:45.061745 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59642e8210932a08f44d53c26396940e76941dff79366582e53ac5d2628ae66d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" Jan 28 00:02:45.063505 kubelet[2747]: E0128 00:02:45.061802 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-697c7bd8db-5lsmb_calico-apiserver(137246fc-f131-4552-a311-34e5752765be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-697c7bd8db-5lsmb_calico-apiserver(137246fc-f131-4552-a311-34e5752765be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59642e8210932a08f44d53c26396940e76941dff79366582e53ac5d2628ae66d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:02:45.086471 containerd[1582]: time="2026-01-28T00:02:45.086410290Z" level=error msg="Failed to destroy network for sandbox \"fb2d833ebac8a7bf1d395184c5f3fd040228b30613a37a3f32131e900e1022df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.090697 containerd[1582]: time="2026-01-28T00:02:45.090552036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h6h5c,Uid:bec7677b-1e96-4176-929f-6c7596f75411,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2d833ebac8a7bf1d395184c5f3fd040228b30613a37a3f32131e900e1022df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.091358 kubelet[2747]: E0128 00:02:45.091305 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2d833ebac8a7bf1d395184c5f3fd040228b30613a37a3f32131e900e1022df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.091538 kubelet[2747]: E0128 00:02:45.091514 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2d833ebac8a7bf1d395184c5f3fd040228b30613a37a3f32131e900e1022df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h6h5c" 
Jan 28 00:02:45.091674 kubelet[2747]: E0128 00:02:45.091654 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb2d833ebac8a7bf1d395184c5f3fd040228b30613a37a3f32131e900e1022df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-h6h5c" Jan 28 00:02:45.091803 kubelet[2747]: E0128 00:02:45.091775 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-h6h5c_calico-system(bec7677b-1e96-4176-929f-6c7596f75411)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-h6h5c_calico-system(bec7677b-1e96-4176-929f-6c7596f75411)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb2d833ebac8a7bf1d395184c5f3fd040228b30613a37a3f32131e900e1022df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:02:45.108144 containerd[1582]: time="2026-01-28T00:02:45.108008148Z" level=error msg="Failed to destroy network for sandbox \"7c53aace4af2f2f063f5a3b2cd96ada50ddbc0a269ad9922ae01c21fae45f58f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.116079 containerd[1582]: time="2026-01-28T00:02:45.115988543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-7vcbz,Uid:07cd90a0-de7e-4c03-9b09-e4adb1ab3e71,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c53aace4af2f2f063f5a3b2cd96ada50ddbc0a269ad9922ae01c21fae45f58f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.116854 kubelet[2747]: E0128 00:02:45.116799 2747 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c53aace4af2f2f063f5a3b2cd96ada50ddbc0a269ad9922ae01c21fae45f58f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 00:02:45.119714 kubelet[2747]: E0128 00:02:45.118767 2747 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c53aace4af2f2f063f5a3b2cd96ada50ddbc0a269ad9922ae01c21fae45f58f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" Jan 28 00:02:45.119714 kubelet[2747]: E0128 00:02:45.118817 2747 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c53aace4af2f2f063f5a3b2cd96ada50ddbc0a269ad9922ae01c21fae45f58f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" Jan 28 00:02:45.119714 kubelet[2747]: E0128 00:02:45.119195 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-697c7bd8db-7vcbz_calico-apiserver(07cd90a0-de7e-4c03-9b09-e4adb1ab3e71)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-697c7bd8db-7vcbz_calico-apiserver(07cd90a0-de7e-4c03-9b09-e4adb1ab3e71)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c53aace4af2f2f063f5a3b2cd96ada50ddbc0a269ad9922ae01c21fae45f58f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:02:45.170480 systemd[1]: run-netns-cni\x2d8aa44732\x2de8b5\x2d1e2f\x2df526\x2d47964b21bc1b.mount: Deactivated successfully. Jan 28 00:02:45.170632 systemd[1]: run-netns-cni\x2da48e08aa\x2d558a\x2d42f9\x2d8d04\x2defaa10ca6223.mount: Deactivated successfully. Jan 28 00:02:47.781295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2668266124.mount: Deactivated successfully. Jan 28 00:02:47.810317 containerd[1582]: time="2026-01-28T00:02:47.810196760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:47.810897 containerd[1582]: time="2026-01-28T00:02:47.810840434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 28 00:02:47.812390 containerd[1582]: time="2026-01-28T00:02:47.812307512Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:47.815725 containerd[1582]: time="2026-01-28T00:02:47.815642328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:02:47.816623 containerd[1582]: time="2026-01-28T00:02:47.816498293Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.4901847s" Jan 28 00:02:47.816623 containerd[1582]: time="2026-01-28T00:02:47.816541016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 28 00:02:47.836835 containerd[1582]: time="2026-01-28T00:02:47.836765565Z" level=info msg="CreateContainer within sandbox \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 00:02:47.864629 containerd[1582]: time="2026-01-28T00:02:47.864481910Z" level=info msg="Container 4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:47.881727 
containerd[1582]: time="2026-01-28T00:02:47.881533931Z" level=info msg="CreateContainer within sandbox \"dcf9b46e77a330185851ca8bdeead31afe8f0fbb85a967263380e66390ba2085\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd\"" Jan 28 00:02:47.882744 containerd[1582]: time="2026-01-28T00:02:47.882663071Z" level=info msg="StartContainer for \"4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd\"" Jan 28 00:02:47.886911 containerd[1582]: time="2026-01-28T00:02:47.886800850Z" level=info msg="connecting to shim 4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd" address="unix:///run/containerd/s/b8945374d275a1b84d6f7608098699d1ce08532745ce208446fbf94eee02bcfc" protocol=ttrpc version=3 Jan 28 00:02:47.915911 systemd[1]: Started cri-containerd-4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd.scope - libcontainer container 4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd. Jan 28 00:02:47.983776 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 28 00:02:47.983979 kernel: audit: type=1334 audit(1769558567.978:572): prog-id=172 op=LOAD Jan 28 00:02:47.984006 kernel: audit: type=1300 audit(1769558567.978:572): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:47.978000 audit: BPF prog-id=172 op=LOAD Jan 28 00:02:47.978000 audit[3715]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:47.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:47.988639 kernel: audit: type=1327 audit(1769558567.978:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:47.978000 audit: BPF prog-id=173 op=LOAD Jan 28 00:02:47.978000 audit[3715]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:48.007617 kernel: audit: type=1334 audit(1769558567.978:573): prog-id=173 op=LOAD Jan 28 00:02:47.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:48.014383 kernel: audit: type=1300 audit(1769558567.978:573): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:48.014662 kernel: audit: type=1327 audit(1769558567.978:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:48.002000 audit: BPF prog-id=173 op=UNLOAD Jan 28 00:02:48.002000 audit[3715]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:48.016695 kernel: audit: type=1334 audit(1769558568.002:574): prog-id=173 op=UNLOAD Jan 28 00:02:48.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:48.021799 kernel: audit: type=1300 audit(1769558568.002:574): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:48.021926 kernel: audit: type=1327 audit(1769558568.002:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:48.023112 kernel: audit: type=1334 audit(1769558568.002:575): prog-id=172 op=UNLOAD Jan 28 00:02:48.002000 audit: BPF prog-id=172 op=UNLOAD Jan 28 00:02:48.002000 audit[3715]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:48.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:48.002000 audit: BPF prog-id=174 op=LOAD Jan 28 00:02:48.002000 audit[3715]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3245 pid=3715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:48.002000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3466613061353832333838663236353636666136626138326666366561 Jan 28 00:02:48.051981 containerd[1582]: time="2026-01-28T00:02:48.051706168Z" level=info msg="StartContainer for 
\"4fa0a582388f26566fa6ba82ff6eaecfa14b98548a06ef5845a5babd0c6fa2fd\" returns successfully" Jan 28 00:02:48.242786 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 00:02:48.242946 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 00:02:48.399421 kubelet[2747]: I0128 00:02:48.398512 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4lx2l" podStartSLOduration=1.6020021770000001 podStartE2EDuration="15.398487996s" podCreationTimestamp="2026-01-28 00:02:33 +0000 UTC" firstStartedPulling="2026-01-28 00:02:34.021077731 +0000 UTC m=+34.047960650" lastFinishedPulling="2026-01-28 00:02:47.81756355 +0000 UTC m=+47.844446469" observedRunningTime="2026-01-28 00:02:48.390876079 +0000 UTC m=+48.417758998" watchObservedRunningTime="2026-01-28 00:02:48.398487996 +0000 UTC m=+48.425370875" Jan 28 00:02:48.563119 kubelet[2747]: I0128 00:02:48.563056 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-ca-bundle\") pod \"2165c40d-a709-4e3d-baf9-8e91747af847\" (UID: \"2165c40d-a709-4e3d-baf9-8e91747af847\") " Jan 28 00:02:48.563438 kubelet[2747]: I0128 00:02:48.563275 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pfsv\" (UniqueName: \"kubernetes.io/projected/2165c40d-a709-4e3d-baf9-8e91747af847-kube-api-access-4pfsv\") pod \"2165c40d-a709-4e3d-baf9-8e91747af847\" (UID: \"2165c40d-a709-4e3d-baf9-8e91747af847\") " Jan 28 00:02:48.563438 kubelet[2747]: I0128 00:02:48.563406 2747 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-backend-key-pair\") pod \"2165c40d-a709-4e3d-baf9-8e91747af847\" (UID: \"2165c40d-a709-4e3d-baf9-8e91747af847\") " Jan 28 00:02:48.582633 kubelet[2747]: I0128 00:02:48.581435 2747 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2165c40d-a709-4e3d-baf9-8e91747af847" (UID: "2165c40d-a709-4e3d-baf9-8e91747af847"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 00:02:48.586020 kubelet[2747]: I0128 00:02:48.585969 2747 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2165c40d-a709-4e3d-baf9-8e91747af847" (UID: "2165c40d-a709-4e3d-baf9-8e91747af847"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 00:02:48.590679 kubelet[2747]: I0128 00:02:48.590012 2747 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2165c40d-a709-4e3d-baf9-8e91747af847-kube-api-access-4pfsv" (OuterVolumeSpecName: "kube-api-access-4pfsv") pod "2165c40d-a709-4e3d-baf9-8e91747af847" (UID: "2165c40d-a709-4e3d-baf9-8e91747af847"). InnerVolumeSpecName "kube-api-access-4pfsv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 00:02:48.664840 kubelet[2747]: I0128 00:02:48.664516 2747 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4pfsv\" (UniqueName: \"kubernetes.io/projected/2165c40d-a709-4e3d-baf9-8e91747af847-kube-api-access-4pfsv\") on node \"ci-4593-0-0-n-20383d5ef7\" DevicePath \"\"" Jan 28 00:02:48.664840 kubelet[2747]: I0128 00:02:48.664586 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-ca-bundle\") on node \"ci-4593-0-0-n-20383d5ef7\" DevicePath \"\"" Jan 28 00:02:48.664840 kubelet[2747]: I0128 00:02:48.664655 2747 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2165c40d-a709-4e3d-baf9-8e91747af847-whisker-backend-key-pair\") on node \"ci-4593-0-0-n-20383d5ef7\" DevicePath \"\"" Jan 28 00:02:48.781516 systemd[1]: var-lib-kubelet-pods-2165c40d\x2da709\x2d4e3d\x2dbaf9\x2d8e91747af847-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4pfsv.mount: Deactivated successfully. Jan 28 00:02:48.781686 systemd[1]: var-lib-kubelet-pods-2165c40d\x2da709\x2d4e3d\x2dbaf9\x2d8e91747af847-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 00:02:49.358787 systemd[1]: Removed slice kubepods-besteffort-pod2165c40d_a709_4e3d_baf9_8e91747af847.slice - libcontainer container kubepods-besteffort-pod2165c40d_a709_4e3d_baf9_8e91747af847.slice. Jan 28 00:02:49.462583 systemd[1]: Created slice kubepods-besteffort-pod28c2487d_b7dc_46cd_9e63_ec0ab10a8703.slice - libcontainer container kubepods-besteffort-pod28c2487d_b7dc_46cd_9e63_ec0ab10a8703.slice. Jan 28 00:02:49.572557 kubelet[2747]: I0128 00:02:49.572482 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/28c2487d-b7dc-46cd-9e63-ec0ab10a8703-whisker-backend-key-pair\") pod \"whisker-558b9cfcb8-hltp7\" (UID: \"28c2487d-b7dc-46cd-9e63-ec0ab10a8703\") " pod="calico-system/whisker-558b9cfcb8-hltp7" Jan 28 00:02:49.573636 kubelet[2747]: I0128 00:02:49.573466 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8965z\" (UniqueName: \"kubernetes.io/projected/28c2487d-b7dc-46cd-9e63-ec0ab10a8703-kube-api-access-8965z\") pod \"whisker-558b9cfcb8-hltp7\" (UID: \"28c2487d-b7dc-46cd-9e63-ec0ab10a8703\") " pod="calico-system/whisker-558b9cfcb8-hltp7" Jan 28 00:02:49.573636 kubelet[2747]: I0128 00:02:49.573677 2747 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28c2487d-b7dc-46cd-9e63-ec0ab10a8703-whisker-ca-bundle\") pod \"whisker-558b9cfcb8-hltp7\" (UID: \"28c2487d-b7dc-46cd-9e63-ec0ab10a8703\") " pod="calico-system/whisker-558b9cfcb8-hltp7" Jan 28 00:02:49.772371 containerd[1582]: time="2026-01-28T00:02:49.772319145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b9cfcb8-hltp7,Uid:28c2487d-b7dc-46cd-9e63-ec0ab10a8703,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:50.111659 systemd-networkd[1464]: cali481d435eb1f: Link UP Jan 28 00:02:50.115411 systemd-networkd[1464]: cali481d435eb1f: Gained carrier Jan 28 00:02:50.124247 kubelet[2747]: I0128 00:02:50.122539 2747 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2165c40d-a709-4e3d-baf9-8e91747af847" path="/var/lib/kubelet/pods/2165c40d-a709-4e3d-baf9-8e91747af847/volumes" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:49.825 [INFO][3832] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:49.901 [INFO][3832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0 whisker-558b9cfcb8- calico-system 28c2487d-b7dc-46cd-9e63-ec0ab10a8703 927 0 2026-01-28 00:02:49 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:558b9cfcb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 whisker-558b9cfcb8-hltp7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali481d435eb1f [] [] }} ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:49.901 [INFO][3832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.008 [INFO][3884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" HandleID="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Workload="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.009 [INFO][3884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" HandleID="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Workload="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"whisker-558b9cfcb8-hltp7", "timestamp":"2026-01-28 00:02:50.008835935 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.009 [INFO][3884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.009 [INFO][3884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.009 [INFO][3884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.024 [INFO][3884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.042 [INFO][3884] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.051 [INFO][3884] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.055 [INFO][3884] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.062 [INFO][3884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.062 [INFO][3884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.066 [INFO][3884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646 Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.073 [INFO][3884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.081 [INFO][3884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.193/26] block=192.168.117.192/26 handle="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.081 [INFO][3884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.193/26] handle="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:50.154706 containerd[1582]: 2026-01-28 00:02:50.081 [INFO][3884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
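The ipam/ lines above show the allocation walk for this pod: confirm the node's affinity for the 192.168.117.192/26 block, load the block under the host-wide IPAM lock, claim one free address, and write the block back, ending with 192.168.117.193/26 assigned. The toy model below only illustrates picking the next free address from such a block; Calico's real allocator also tracks handles, affinities and reservations, and whether the .192 address itself was assignable is not visible in this log.

import ipaddress

def next_free_ip(block_cidr: str, allocated: set[str]) -> str:
    """Pick the first unallocated address in an IPAM block (toy model, not Calico's allocator)."""
    block = ipaddress.ip_network(block_cidr)  # a /26 spans 64 addresses
    for ip in block:                          # iterate candidate addresses in block order
        if str(ip) not in allocated:
            return str(ip)
    raise RuntimeError(f"block {block_cidr} is exhausted")

# Treating .192 as already taken, the next claim matches the log entry above:
print(next_free_ip("192.168.117.192/26", {"192.168.117.192"}))  # -> 192.168.117.193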
Jan 28 00:02:50.156996 containerd[1582]: 2026-01-28 00:02:50.082 [INFO][3884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.193/26] IPv6=[] ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" HandleID="k8s-pod-network.6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Workload="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.156996 containerd[1582]: 2026-01-28 00:02:50.086 [INFO][3832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0", GenerateName:"whisker-558b9cfcb8-", Namespace:"calico-system", SelfLink:"", UID:"28c2487d-b7dc-46cd-9e63-ec0ab10a8703", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"558b9cfcb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"whisker-558b9cfcb8-hltp7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali481d435eb1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:50.156996 containerd[1582]: 2026-01-28 00:02:50.087 [INFO][3832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.193/32] ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.156996 containerd[1582]: 2026-01-28 00:02:50.087 [INFO][3832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali481d435eb1f ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.156996 containerd[1582]: 2026-01-28 00:02:50.111 [INFO][3832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.156996 containerd[1582]: 2026-01-28 00:02:50.114 [INFO][3832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" 
Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0", GenerateName:"whisker-558b9cfcb8-", Namespace:"calico-system", SelfLink:"", UID:"28c2487d-b7dc-46cd-9e63-ec0ab10a8703", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"558b9cfcb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646", Pod:"whisker-558b9cfcb8-hltp7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.117.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali481d435eb1f", MAC:"86:69:d9:66:60:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:50.157209 containerd[1582]: 2026-01-28 00:02:50.147 [INFO][3832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" Namespace="calico-system" Pod="whisker-558b9cfcb8-hltp7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-whisker--558b9cfcb8--hltp7-eth0" Jan 28 00:02:50.215017 containerd[1582]: time="2026-01-28T00:02:50.213681726Z" level=info msg="connecting to shim 6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646" address="unix:///run/containerd/s/a3cb2a8c08c2f9ed5a2fd797b17e959dc5909051908899b4877151bf6b3eafa4" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:50.278775 systemd[1]: Started cri-containerd-6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646.scope - libcontainer container 6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646. 
Jan 28 00:02:50.316000 audit: BPF prog-id=175 op=LOAD Jan 28 00:02:50.318000 audit: BPF prog-id=176 op=LOAD Jan 28 00:02:50.318000 audit[3960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c180 a2=98 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.318000 audit: BPF prog-id=176 op=UNLOAD Jan 28 00:02:50.318000 audit[3960]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.321000 audit: BPF prog-id=177 op=LOAD Jan 28 00:02:50.321000 audit[3960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c3e8 a2=98 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.321000 audit: BPF prog-id=178 op=LOAD Jan 28 00:02:50.321000 audit[3960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400020c168 a2=98 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.321000 audit: BPF prog-id=178 op=UNLOAD Jan 28 00:02:50.321000 audit[3960]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.321000 audit: BPF prog-id=177 op=UNLOAD Jan 28 00:02:50.321000 audit[3960]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.321000 audit: BPF prog-id=179 op=LOAD Jan 28 00:02:50.321000 audit[3960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c648 a2=98 a3=0 items=0 ppid=3948 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.321000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3665363437623539613262313063633133623739303837623630363438 Jan 28 00:02:50.456659 containerd[1582]: time="2026-01-28T00:02:50.456265511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b9cfcb8-hltp7,Uid:28c2487d-b7dc-46cd-9e63-ec0ab10a8703,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e647b59a2b10cc13b79087b60648483da6729fe2759a4d9311afc7cf0351646\"" Jan 28 00:02:50.466295 containerd[1582]: time="2026-01-28T00:02:50.465134841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:02:50.708000 audit: BPF prog-id=180 op=LOAD Jan 28 00:02:50.708000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3e15eb8 a2=98 a3=ffffd3e15ea8 items=0 ppid=3853 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.708000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:02:50.708000 audit: BPF prog-id=180 op=UNLOAD Jan 28 00:02:50.708000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd3e15e88 a3=0 items=0 ppid=3853 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.708000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:02:50.708000 audit: BPF prog-id=181 op=LOAD Jan 28 00:02:50.708000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3e15d68 a2=74 a3=95 items=0 ppid=3853 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.708000 audit: 
PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:02:50.708000 audit: BPF prog-id=181 op=UNLOAD Jan 28 00:02:50.708000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3853 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.708000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:02:50.708000 audit: BPF prog-id=182 op=LOAD Jan 28 00:02:50.708000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd3e15d98 a2=40 a3=ffffd3e15dc8 items=0 ppid=3853 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.708000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:02:50.708000 audit: BPF prog-id=182 op=UNLOAD Jan 28 00:02:50.708000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffd3e15dc8 items=0 ppid=3853 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.708000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 28 00:02:50.711000 audit: BPF prog-id=183 op=LOAD Jan 28 00:02:50.711000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3f3e7f8 a2=98 a3=ffffc3f3e7e8 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.711000 audit: BPF prog-id=183 op=UNLOAD Jan 28 00:02:50.711000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc3f3e7c8 a3=0 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.711000 audit: BPF prog-id=184 op=LOAD Jan 28 00:02:50.711000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc3f3e488 a2=74 a3=95 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.711000 audit: BPF prog-id=184 op=UNLOAD Jan 28 00:02:50.711000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.711000 audit: BPF prog-id=185 op=LOAD Jan 28 00:02:50.711000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc3f3e4e8 a2=94 a3=2 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.711000 audit: BPF prog-id=185 op=UNLOAD Jan 28 00:02:50.711000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.711000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.809394 containerd[1582]: time="2026-01-28T00:02:50.809316899Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:50.810961 containerd[1582]: time="2026-01-28T00:02:50.810909140Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:02:50.811169 containerd[1582]: time="2026-01-28T00:02:50.811026146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:50.811677 kubelet[2747]: E0128 00:02:50.811289 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:02:50.811677 kubelet[2747]: E0128 00:02:50.811369 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:02:50.825461 kubelet[2747]: E0128 00:02:50.824380 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5bb067f95b524ace93b37ce99b4fd669,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:50.827744 containerd[1582]: time="2026-01-28T00:02:50.827696472Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:02:50.835000 audit: BPF prog-id=186 op=LOAD Jan 28 00:02:50.835000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc3f3e4a8 a2=40 a3=ffffc3f3e4d8 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.835000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.835000 audit: BPF prog-id=186 op=UNLOAD Jan 28 00:02:50.835000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc3f3e4d8 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.835000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.850000 audit: BPF prog-id=187 op=LOAD Jan 28 00:02:50.850000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc3f3e4b8 a2=94 a3=4 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.850000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.851000 audit: BPF prog-id=187 op=UNLOAD Jan 28 00:02:50.851000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c 
a2=70 a3=4 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.851000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.852000 audit: BPF prog-id=188 op=LOAD Jan 28 00:02:50.852000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc3f3e2f8 a2=94 a3=5 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.852000 audit: BPF prog-id=188 op=UNLOAD Jan 28 00:02:50.852000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.852000 audit: BPF prog-id=189 op=LOAD Jan 28 00:02:50.852000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc3f3e528 a2=94 a3=6 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.852000 audit: BPF prog-id=189 op=UNLOAD Jan 28 00:02:50.852000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.852000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.853000 audit: BPF prog-id=190 op=LOAD Jan 28 00:02:50.853000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc3f3dcf8 a2=94 a3=83 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.853000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.854000 audit: BPF prog-id=191 op=LOAD Jan 28 00:02:50.854000 audit[4047]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc3f3dab8 a2=94 a3=2 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.854000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.854000 audit: BPF prog-id=191 op=UNLOAD Jan 28 00:02:50.854000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.854000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.855000 audit: BPF prog-id=190 op=UNLOAD Jan 28 00:02:50.855000 audit[4047]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=f46620 a3=f39b00 items=0 ppid=3853 pid=4047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.855000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 28 00:02:50.878000 audit: BPF prog-id=192 op=LOAD Jan 28 00:02:50.878000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffde85e588 a2=98 a3=ffffde85e578 items=0 ppid=3853 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.878000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:02:50.879000 audit: BPF prog-id=192 op=UNLOAD Jan 28 00:02:50.879000 audit[4050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffde85e558 a3=0 items=0 ppid=3853 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.879000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:02:50.879000 audit: BPF prog-id=193 op=LOAD Jan 28 00:02:50.879000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffde85e438 a2=74 a3=95 items=0 ppid=3853 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.879000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:02:50.879000 audit: BPF prog-id=193 op=UNLOAD Jan 28 00:02:50.879000 audit[4050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3853 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.879000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:02:50.879000 audit: BPF prog-id=194 op=LOAD Jan 28 00:02:50.879000 audit[4050]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 
a1=ffffde85e468 a2=40 a3=ffffde85e498 items=0 ppid=3853 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.879000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:02:50.879000 audit: BPF prog-id=194 op=UNLOAD Jan 28 00:02:50.879000 audit[4050]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffde85e498 items=0 ppid=3853 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:50.879000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 28 00:02:50.990838 systemd-networkd[1464]: vxlan.calico: Link UP Jan 28 00:02:50.990845 systemd-networkd[1464]: vxlan.calico: Gained carrier Jan 28 00:02:51.012000 audit: BPF prog-id=195 op=LOAD Jan 28 00:02:51.012000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffccf7bcd8 a2=98 a3=ffffccf7bcc8 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.012000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.013000 audit: BPF prog-id=195 op=UNLOAD Jan 28 00:02:51.013000 audit[4078]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffccf7bca8 a3=0 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.013000 audit: BPF prog-id=196 op=LOAD Jan 28 00:02:51.013000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffccf7b9b8 a2=74 a3=95 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.013000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=196 op=UNLOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3853 pid=4078 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=197 op=LOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffccf7ba18 a2=94 a3=2 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=197 op=UNLOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=198 op=LOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccf7b898 a2=40 a3=ffffccf7b8c8 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=198 op=UNLOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffccf7b8c8 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=199 op=LOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccf7b9e8 a2=94 a3=b7 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.014000 audit: BPF prog-id=199 op=UNLOAD Jan 28 00:02:51.014000 audit[4078]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.014000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.018000 audit: BPF prog-id=200 op=LOAD Jan 28 00:02:51.018000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccf7b098 a2=94 a3=2 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.018000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.018000 audit: BPF prog-id=200 op=UNLOAD Jan 28 00:02:51.018000 audit[4078]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.018000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.018000 audit: BPF prog-id=201 op=LOAD Jan 28 00:02:51.018000 audit[4078]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffccf7b228 a2=94 a3=30 items=0 ppid=3853 pid=4078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.018000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 28 00:02:51.029000 audit: BPF prog-id=202 op=LOAD Jan 28 00:02:51.029000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5063eb8 a2=98 a3=ffffc5063ea8 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.029000 audit: BPF prog-id=202 op=UNLOAD Jan 28 00:02:51.029000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 
a0=3 a1=57156c a2=ffffc5063e88 a3=0 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.029000 audit: BPF prog-id=203 op=LOAD Jan 28 00:02:51.029000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5063b48 a2=74 a3=95 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.029000 audit: BPF prog-id=203 op=UNLOAD Jan 28 00:02:51.029000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.029000 audit: BPF prog-id=204 op=LOAD Jan 28 00:02:51.029000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5063ba8 a2=94 a3=2 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.029000 audit: BPF prog-id=204 op=UNLOAD Jan 28 00:02:51.029000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.029000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.146000 audit: BPF prog-id=205 op=LOAD Jan 28 00:02:51.146000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5063b68 a2=40 a3=ffffc5063b98 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.146000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 
00:02:51.147000 audit: BPF prog-id=205 op=UNLOAD Jan 28 00:02:51.147000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc5063b98 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.147000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.158000 audit: BPF prog-id=206 op=LOAD Jan 28 00:02:51.158000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5063b78 a2=94 a3=4 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.158000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.159000 audit: BPF prog-id=206 op=UNLOAD Jan 28 00:02:51.159000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.159000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.159000 audit: BPF prog-id=207 op=LOAD Jan 28 00:02:51.159000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc50639b8 a2=94 a3=5 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.159000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.159000 audit: BPF prog-id=207 op=UNLOAD Jan 28 00:02:51.159000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.159000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.159000 audit: BPF prog-id=208 op=LOAD Jan 28 00:02:51.159000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc5063be8 a2=94 a3=6 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.159000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.160000 audit: BPF prog-id=208 op=UNLOAD Jan 28 00:02:51.160000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.160000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.161000 audit: BPF prog-id=209 op=LOAD Jan 28 00:02:51.161000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc50633b8 a2=94 a3=83 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.161000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.162000 audit: BPF prog-id=210 op=LOAD Jan 28 00:02:51.162000 audit[4082]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc5063178 a2=94 a3=2 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.162000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.162000 audit: BPF prog-id=210 op=UNLOAD Jan 28 00:02:51.162000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.162000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.162000 audit: BPF prog-id=209 op=UNLOAD Jan 28 00:02:51.162000 audit[4082]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=3c8a7620 a3=3c89ab00 items=0 ppid=3853 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.162000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 28 00:02:51.173714 containerd[1582]: time="2026-01-28T00:02:51.173516306Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:51.178312 containerd[1582]: time="2026-01-28T00:02:51.175455483Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:02:51.178576 containerd[1582]: time="2026-01-28T00:02:51.178288625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:51.179585 kubelet[2747]: E0128 00:02:51.179528 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:02:51.179774 kubelet[2747]: E0128 00:02:51.179729 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:02:51.178000 audit: BPF prog-id=201 op=UNLOAD Jan 28 00:02:51.180802 kubelet[2747]: E0128 00:02:51.180003 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 
00:02:51.178000 audit[3853]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000910680 a2=0 a3=0 items=0 ppid=3843 pid=3853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.178000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 28 00:02:51.181804 kubelet[2747]: E0128 00:02:51.181401 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:02:51.255000 audit[4107]: NETFILTER_CFG table=mangle:123 family=2 entries=16 op=nft_register_chain pid=4107 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:51.255000 audit[4107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc7a6f600 a2=0 a3=ffff9cff4fa8 items=0 ppid=3853 pid=4107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.258000 audit[4109]: NETFILTER_CFG table=nat:124 family=2 entries=15 op=nft_register_chain pid=4109 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:51.258000 audit[4109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc614a3f0 a2=0 a3=ffffb77e6fa8 items=0 ppid=3853 pid=4109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.258000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:51.255000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:51.258823 systemd-networkd[1464]: cali481d435eb1f: Gained IPv6LL Jan 28 00:02:51.268000 audit[4106]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4106 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:51.268000 audit[4106]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffd5d54b60 a2=0 a3=ffff8214cfa8 items=0 ppid=3853 pid=4106 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.268000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:51.291000 audit[4111]: NETFILTER_CFG table=filter:126 family=2 entries=94 
op=nft_register_chain pid=4111 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:51.291000 audit[4111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=fffffb0e2390 a2=0 a3=ffff928d3fa8 items=0 ppid=3853 pid=4111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.291000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:51.359716 kubelet[2747]: E0128 00:02:51.359564 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:02:51.396000 audit[4119]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:51.396000 audit[4119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe8601500 a2=0 a3=1 items=0 ppid=2849 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:51.402000 audit[4119]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:51.402000 audit[4119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe8601500 a2=0 a3=1 items=0 ppid=2849 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:51.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:52.363901 kubelet[2747]: E0128 00:02:52.363817 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:02:52.986970 systemd-networkd[1464]: vxlan.calico: Gained IPv6LL Jan 28 00:02:54.109347 containerd[1582]: time="2026-01-28T00:02:54.108965319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6mgn,Uid:2955546a-cb98-4307-9f9a-44877b3e7017,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:54.310309 systemd-networkd[1464]: calid4f241f9ed7: Link UP Jan 28 00:02:54.312915 systemd-networkd[1464]: calid4f241f9ed7: Gained carrier Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.183 [INFO][4124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0 csi-node-driver- calico-system 2955546a-cb98-4307-9f9a-44877b3e7017 807 0 2026-01-28 00:02:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 csi-node-driver-w6mgn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid4f241f9ed7 [] [] }} ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.183 [INFO][4124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.224 [INFO][4136] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" HandleID="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Workload="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.224 [INFO][4136] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" HandleID="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Workload="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"csi-node-driver-w6mgn", "timestamp":"2026-01-28 00:02:54.224627163 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.224 [INFO][4136] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.225 [INFO][4136] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.225 [INFO][4136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.240 [INFO][4136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.254 [INFO][4136] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.264 [INFO][4136] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.268 [INFO][4136] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.273 [INFO][4136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.274 [INFO][4136] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.277 [INFO][4136] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055 Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.287 [INFO][4136] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.298 [INFO][4136] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.194/26] block=192.168.117.192/26 handle="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.298 [INFO][4136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.194/26] handle="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.299 [INFO][4136] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:02:54.335001 containerd[1582]: 2026-01-28 00:02:54.299 [INFO][4136] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.194/26] IPv6=[] ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" HandleID="k8s-pod-network.700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Workload="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.337757 containerd[1582]: 2026-01-28 00:02:54.303 [INFO][4124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2955546a-cb98-4307-9f9a-44877b3e7017", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"csi-node-driver-w6mgn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid4f241f9ed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:54.337757 containerd[1582]: 2026-01-28 00:02:54.303 [INFO][4124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.194/32] ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.337757 containerd[1582]: 2026-01-28 00:02:54.303 [INFO][4124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4f241f9ed7 ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.337757 containerd[1582]: 2026-01-28 00:02:54.308 [INFO][4124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.337757 containerd[1582]: 2026-01-28 00:02:54.310 [INFO][4124] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2955546a-cb98-4307-9f9a-44877b3e7017", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055", Pod:"csi-node-driver-w6mgn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.117.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid4f241f9ed7", MAC:"16:63:b9:06:97:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:54.337757 containerd[1582]: 2026-01-28 00:02:54.326 [INFO][4124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" Namespace="calico-system" Pod="csi-node-driver-w6mgn" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-csi--node--driver--w6mgn-eth0" Jan 28 00:02:54.401000 audit[4151]: NETFILTER_CFG table=filter:129 family=2 entries=36 op=nft_register_chain pid=4151 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:54.405621 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 28 00:02:54.405792 kernel: audit: type=1325 audit(1769558574.401:653): table=filter:129 family=2 entries=36 op=nft_register_chain pid=4151 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:54.407463 containerd[1582]: time="2026-01-28T00:02:54.407317095Z" level=info msg="connecting to shim 700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055" address="unix:///run/containerd/s/6ab2fb112c57a39f638241d0ee8ee1f28f61051cf9f9e81c288a35c8d0430334" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:54.401000 audit[4151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffc1076c70 a2=0 a3=ffff8233dfa8 items=0 ppid=3853 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.411133 kernel: audit: type=1300 audit(1769558574.401:653): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffc1076c70 a2=0 a3=ffff8233dfa8 items=0 ppid=3853 pid=4151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.401000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:54.414998 kernel: audit: type=1327 audit(1769558574.401:653): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:54.464237 systemd[1]: Started cri-containerd-700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055.scope - libcontainer container 700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055. Jan 28 00:02:54.493000 audit: BPF prog-id=211 op=LOAD Jan 28 00:02:54.496638 kernel: audit: type=1334 audit(1769558574.493:654): prog-id=211 op=LOAD Jan 28 00:02:54.495000 audit: BPF prog-id=212 op=LOAD Jan 28 00:02:54.495000 audit[4173]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.503543 kernel: audit: type=1334 audit(1769558574.495:655): prog-id=212 op=LOAD Jan 28 00:02:54.503887 kernel: audit: type=1300 audit(1769558574.495:655): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.504013 kernel: audit: type=1327 audit(1769558574.495:655): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.495000 audit: BPF prog-id=212 op=UNLOAD Jan 28 00:02:54.495000 audit[4173]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.509708 kernel: audit: type=1334 audit(1769558574.495:656): prog-id=212 op=UNLOAD Jan 28 00:02:54.510102 kernel: audit: type=1300 audit(1769558574.495:656): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 
00:02:54.515067 kernel: audit: type=1327 audit(1769558574.495:656): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.495000 audit: BPF prog-id=213 op=LOAD Jan 28 00:02:54.495000 audit[4173]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.495000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.496000 audit: BPF prog-id=214 op=LOAD Jan 28 00:02:54.496000 audit[4173]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.496000 audit: BPF prog-id=214 op=UNLOAD Jan 28 00:02:54.496000 audit[4173]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.496000 audit: BPF prog-id=213 op=UNLOAD Jan 28 00:02:54.496000 audit[4173]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.496000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.496000 audit: BPF prog-id=215 op=LOAD Jan 28 00:02:54.496000 audit[4173]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4162 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:54.496000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3730306133333232326233323661616530633136666639373864653432 Jan 28 00:02:54.562788 containerd[1582]: time="2026-01-28T00:02:54.562408569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w6mgn,Uid:2955546a-cb98-4307-9f9a-44877b3e7017,Namespace:calico-system,Attempt:0,} returns sandbox id \"700a33222b326aae0c16ff978de423b9e7ef4d1f519fb6686ac94e6dd6265055\"" Jan 28 00:02:54.565494 containerd[1582]: time="2026-01-28T00:02:54.565352992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:02:54.913841 containerd[1582]: time="2026-01-28T00:02:54.913547222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:54.916081 containerd[1582]: time="2026-01-28T00:02:54.916015582Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:02:54.916382 containerd[1582]: time="2026-01-28T00:02:54.916263554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:54.919360 kubelet[2747]: E0128 00:02:54.916798 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:02:54.919360 kubelet[2747]: E0128 00:02:54.916857 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:02:54.919360 kubelet[2747]: E0128 00:02:54.917020 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:54.927610 containerd[1582]: time="2026-01-28T00:02:54.927521060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:02:55.297700 containerd[1582]: time="2026-01-28T00:02:55.297375397Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:55.299424 containerd[1582]: time="2026-01-28T00:02:55.299331771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:02:55.299650 containerd[1582]: time="2026-01-28T00:02:55.299540261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:55.300649 kubelet[2747]: E0128 00:02:55.300047 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:02:55.300649 kubelet[2747]: E0128 00:02:55.300102 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:02:55.300649 kubelet[2747]: E0128 00:02:55.300261 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:55.301938 kubelet[2747]: E0128 00:02:55.301661 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:55.386836 kubelet[2747]: E0128 00:02:55.381553 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:56.110286 containerd[1582]: time="2026-01-28T00:02:56.110162577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ckpk7,Uid:c920f575-603c-483a-9607-ef10e4a56793,Namespace:kube-system,Attempt:0,}" Jan 28 00:02:56.123924 systemd-networkd[1464]: calid4f241f9ed7: Gained IPv6LL Jan 28 00:02:56.331431 systemd-networkd[1464]: calide5cfc82709: Link UP Jan 28 00:02:56.332965 systemd-networkd[1464]: calide5cfc82709: Gained carrier Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.212 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0 coredns-668d6bf9bc- kube-system c920f575-603c-483a-9607-ef10e4a56793 858 0 2026-01-28 00:02:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 coredns-668d6bf9bc-ckpk7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calide5cfc82709 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.212 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.256 [INFO][4211] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" HandleID="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Workload="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.256 [INFO][4211] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" HandleID="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Workload="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"coredns-668d6bf9bc-ckpk7", "timestamp":"2026-01-28 00:02:56.256410327 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.256 [INFO][4211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.256 [INFO][4211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.256 [INFO][4211] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.273 [INFO][4211] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.282 [INFO][4211] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.290 [INFO][4211] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.295 [INFO][4211] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.300 [INFO][4211] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.300 [INFO][4211] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.303 [INFO][4211] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.309 [INFO][4211] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.320 [INFO][4211] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.195/26] block=192.168.117.192/26 handle="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.320 [INFO][4211] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.195/26] handle="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.320 [INFO][4211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
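The IPAM records above show the Calico plugin taking the host-wide lock, confirming this node's affinity for the block 192.168.117.192/26, and claiming 192.168.117.195 for coredns-668d6bf9bc-ckpk7; 192.168.117.194 was handed out from the same block for csi-node-driver-w6mgn slightly earlier. As an illustration only (standard-library Python, not Calico code; variable names are made up, the block and addresses are copied from the trace), the containment relationship the allocator relies on can be checked directly:

    import ipaddress

    # Block and addresses copied from the IPAM trace above; everything else is illustrative.
    block = ipaddress.ip_network("192.168.117.192/26")    # node's affine block, .192-.255
    assigned = ["192.168.117.194", "192.168.117.195"]     # csi-node-driver-w6mgn, coredns-ckpk7

    for addr in assigned:
        # Prints True for both addresses: they fall inside the affine /26.
        print(addr, "in", block, "->", ipaddress.ip_address(addr) in block)

The same block yields 192.168.117.196 for a calico-apiserver pod a second later, so every workload endpoint on this node so far sits inside one affine /26.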
Jan 28 00:02:56.359183 containerd[1582]: 2026-01-28 00:02:56.320 [INFO][4211] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.195/26] IPv6=[] ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" HandleID="k8s-pod-network.ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Workload="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.361610 containerd[1582]: 2026-01-28 00:02:56.323 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c920f575-603c-483a-9607-ef10e4a56793", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"coredns-668d6bf9bc-ckpk7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide5cfc82709", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:56.361610 containerd[1582]: 2026-01-28 00:02:56.324 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.195/32] ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.361610 containerd[1582]: 2026-01-28 00:02:56.325 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide5cfc82709 ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.361610 containerd[1582]: 2026-01-28 00:02:56.333 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.362493 containerd[1582]: 2026-01-28 00:02:56.333 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c920f575-603c-483a-9607-ef10e4a56793", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad", Pod:"coredns-668d6bf9bc-ckpk7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calide5cfc82709", MAC:"4a:b9:4b:c0:d7:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:56.362493 containerd[1582]: 2026-01-28 00:02:56.356 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" Namespace="kube-system" Pod="coredns-668d6bf9bc-ckpk7" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--ckpk7-eth0" Jan 28 00:02:56.386944 kubelet[2747]: E0128 00:02:56.386874 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:02:56.423437 containerd[1582]: time="2026-01-28T00:02:56.423102568Z" level=info msg="connecting to shim ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad" address="unix:///run/containerd/s/9c7aee11cfa5cc37fc9496114c378df0ab024575be64dfbfa066868acb02f133" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:56.444000 audit[4252]: NETFILTER_CFG table=filter:130 family=2 entries=52 op=nft_register_chain pid=4252 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:56.444000 audit[4252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26592 a0=3 a1=ffffcf353cc0 a2=0 a3=ffffbcdcbfa8 items=0 ppid=3853 pid=4252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.444000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:56.470931 systemd[1]: Started cri-containerd-ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad.scope - libcontainer container ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad. Jan 28 00:02:56.482000 audit: BPF prog-id=216 op=LOAD Jan 28 00:02:56.483000 audit: BPF prog-id=217 op=LOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.483000 audit: BPF prog-id=217 op=UNLOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.483000 audit: BPF prog-id=218 op=LOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.483000 
audit: BPF prog-id=219 op=LOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.483000 audit: BPF prog-id=219 op=UNLOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.483000 audit: BPF prog-id=218 op=UNLOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.483000 audit: BPF prog-id=220 op=LOAD Jan 28 00:02:56.483000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4244 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.483000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6561353032623766646439613538353337373633316662336335356332 Jan 28 00:02:56.521390 containerd[1582]: time="2026-01-28T00:02:56.521348556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ckpk7,Uid:c920f575-603c-483a-9607-ef10e4a56793,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad\"" Jan 28 00:02:56.526002 containerd[1582]: time="2026-01-28T00:02:56.525940494Z" level=info msg="CreateContainer within sandbox \"ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:02:56.546015 containerd[1582]: time="2026-01-28T00:02:56.544845472Z" level=info msg="Container 87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:56.549924 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1309774742.mount: Deactivated successfully. Jan 28 00:02:56.556366 containerd[1582]: time="2026-01-28T00:02:56.556295497Z" level=info msg="CreateContainer within sandbox \"ea502b7fdd9a585377631fb3c55c224dce5dd2395ac8afda365f9364df34a1ad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14\"" Jan 28 00:02:56.557622 containerd[1582]: time="2026-01-28T00:02:56.557292304Z" level=info msg="StartContainer for \"87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14\"" Jan 28 00:02:56.561521 containerd[1582]: time="2026-01-28T00:02:56.561381618Z" level=info msg="connecting to shim 87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14" address="unix:///run/containerd/s/9c7aee11cfa5cc37fc9496114c378df0ab024575be64dfbfa066868acb02f133" protocol=ttrpc version=3 Jan 28 00:02:56.589006 systemd[1]: Started cri-containerd-87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14.scope - libcontainer container 87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14. Jan 28 00:02:56.602000 audit: BPF prog-id=221 op=LOAD Jan 28 00:02:56.603000 audit: BPF prog-id=222 op=LOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.603000 audit: BPF prog-id=222 op=UNLOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.603000 audit: BPF prog-id=223 op=LOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.603000 audit: BPF prog-id=224 op=LOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.603000 audit: BPF prog-id=224 op=UNLOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.603000 audit: BPF prog-id=223 op=UNLOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.603000 audit: BPF prog-id=225 op=LOAD Jan 28 00:02:56.603000 audit[4281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4244 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:56.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3837616662353862656432616634396332633831316530363762383335 Jan 28 00:02:56.633012 containerd[1582]: time="2026-01-28T00:02:56.632373872Z" level=info msg="StartContainer for \"87afb58bed2af49c2c811e067b835a6a20ee6279faeb42cf07bb06b263372b14\" returns successfully" Jan 28 00:02:57.109450 containerd[1582]: time="2026-01-28T00:02:57.108925071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-7vcbz,Uid:07cd90a0-de7e-4c03-9b09-e4adb1ab3e71,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:02:57.110180 containerd[1582]: time="2026-01-28T00:02:57.110050404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqlk4,Uid:075de7ad-4943-4b28-b59b-6c8de5997c15,Namespace:kube-system,Attempt:0,}" Jan 28 00:02:57.113423 containerd[1582]: time="2026-01-28T00:02:57.113333679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-5lsmb,Uid:137246fc-f131-4552-a311-34e5752765be,Namespace:calico-apiserver,Attempt:0,}" Jan 28 00:02:57.379708 systemd-networkd[1464]: calid618a0e7388: Link UP Jan 28 00:02:57.383049 systemd-networkd[1464]: calid618a0e7388: Gained carrier Jan 28 00:02:57.425754 containerd[1582]: 
2026-01-28 00:02:57.221 [INFO][4324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0 calico-apiserver-697c7bd8db- calico-apiserver 137246fc-f131-4552-a311-34e5752765be 860 0 2026-01-28 00:02:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:697c7bd8db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 calico-apiserver-697c7bd8db-5lsmb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid618a0e7388 [] [] }} ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.221 [INFO][4324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.281 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" HandleID="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.281 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" HandleID="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b6c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"calico-apiserver-697c7bd8db-5lsmb", "timestamp":"2026-01-28 00:02:57.281235746 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.281 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.281 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.281 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.297 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.311 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.326 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.329 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.336 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.336 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.343 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9 Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.356 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.368 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.196/26] block=192.168.117.192/26 handle="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.368 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.196/26] handle="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.425754 containerd[1582]: 2026-01-28 00:02:57.368 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
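Interleaved with these CNI traces are kernel audit records (kauditd reported 231 suppressed callbacks earlier) whose PROCTITLE field (audit type=1327) carries the invoking command line, hex-encoded with NUL bytes between argv entries. A minimal Python sketch recovers the readable command line; the decoding approach is generic, and the hex string for the iptables-nft-restore invocation is the only part taken from the log:

    # Hex PROCTITLE value copied from the type=1327 audit record earlier in this log.
    hex_proctitle = (
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"
    )

    # The kernel stores argv NUL-separated, so splitting on 0x00 restores the arguments.
    argv = [a.decode() for a in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(" ".join(argv))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000

The runc PROCTITLE entries in the surrounding records decode the same way, though the kernel truncates long command lines, so their containerd task paths end partway through the container ID.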
Jan 28 00:02:57.427981 containerd[1582]: 2026-01-28 00:02:57.368 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.196/26] IPv6=[] ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" HandleID="k8s-pod-network.0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.427981 containerd[1582]: 2026-01-28 00:02:57.372 [INFO][4324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0", GenerateName:"calico-apiserver-697c7bd8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"137246fc-f131-4552-a311-34e5752765be", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"697c7bd8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"calico-apiserver-697c7bd8db-5lsmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid618a0e7388", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:57.427981 containerd[1582]: 2026-01-28 00:02:57.372 [INFO][4324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.196/32] ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.427981 containerd[1582]: 2026-01-28 00:02:57.373 [INFO][4324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid618a0e7388 ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.427981 containerd[1582]: 2026-01-28 00:02:57.383 [INFO][4324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.428149 containerd[1582]: 2026-01-28 
00:02:57.387 [INFO][4324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0", GenerateName:"calico-apiserver-697c7bd8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"137246fc-f131-4552-a311-34e5752765be", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"697c7bd8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9", Pod:"calico-apiserver-697c7bd8db-5lsmb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid618a0e7388", MAC:"5e:8b:a3:e5:e5:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:57.428149 containerd[1582]: 2026-01-28 00:02:57.417 [INFO][4324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-5lsmb" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--5lsmb-eth0" Jan 28 00:02:57.452010 kubelet[2747]: I0128 00:02:57.451932 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ckpk7" podStartSLOduration=51.451912184 podStartE2EDuration="51.451912184s" podCreationTimestamp="2026-01-28 00:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:02:57.427055053 +0000 UTC m=+57.453937972" watchObservedRunningTime="2026-01-28 00:02:57.451912184 +0000 UTC m=+57.478795103" Jan 28 00:02:57.490904 containerd[1582]: time="2026-01-28T00:02:57.490831216Z" level=info msg="connecting to shim 0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9" address="unix:///run/containerd/s/d6b945e58cd0d4e00153caef5a934e057f8bb312d802386b5856b4618fe048b6" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:57.509000 audit[4399]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:57.509000 audit[4399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffcb225e60 a2=0 a3=1 items=0 ppid=2849 pid=4399 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.509000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:57.514000 audit[4399]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4399 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:57.514000 audit[4399]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcb225e60 a2=0 a3=1 items=0 ppid=2849 pid=4399 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.514000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:57.522223 systemd-networkd[1464]: cali9fb81c0271c: Link UP Jan 28 00:02:57.522461 systemd-networkd[1464]: cali9fb81c0271c: Gained carrier Jan 28 00:02:57.560969 systemd[1]: Started cri-containerd-0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9.scope - libcontainer container 0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9. Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.300 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0 coredns-668d6bf9bc- kube-system 075de7ad-4943-4b28-b59b-6c8de5997c15 849 0 2026-01-28 00:02:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 coredns-668d6bf9bc-hqlk4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9fb81c0271c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.300 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.342 [INFO][4369] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" HandleID="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Workload="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.343 [INFO][4369] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" HandleID="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Workload="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36d0), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"coredns-668d6bf9bc-hqlk4", "timestamp":"2026-01-28 00:02:57.342671399 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.343 [INFO][4369] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.368 [INFO][4369] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.369 [INFO][4369] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.403 [INFO][4369] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.430 [INFO][4369] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.448 [INFO][4369] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.456 [INFO][4369] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.464 [INFO][4369] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.464 [INFO][4369] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.474 [INFO][4369] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.485 [INFO][4369] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.501 [INFO][4369] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.197/26] block=192.168.117.192/26 handle="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.502 [INFO][4369] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.197/26] handle="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.502 [INFO][4369] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
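The IPAM sequence above confirms the block affinity 192.168.117.192/26 for this host and then claims 192.168.117.197 from it for coredns-668d6bf9bc-hqlk4. A minimal sketch of the containment check (Python standard library only; the block and address are copied from the log, nothing else is assumed):

```python
# Verify that the IP claimed by Calico IPAM lies inside the block the host
# holds an affinity for, as the ipam.go messages above report.
import ipaddress

block = ipaddress.ip_network("192.168.117.192/26")   # affine block from the log
claimed = ipaddress.ip_address("192.168.117.197")    # address claimed for coredns-668d6bf9bc-hqlk4

print(claimed in block)      # True
print(block.num_addresses)   # 64 addresses per /26 block
```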
Jan 28 00:02:57.568701 containerd[1582]: 2026-01-28 00:02:57.502 [INFO][4369] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.197/26] IPv6=[] ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" HandleID="k8s-pod-network.fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Workload="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.569321 containerd[1582]: 2026-01-28 00:02:57.514 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"075de7ad-4943-4b28-b59b-6c8de5997c15", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"coredns-668d6bf9bc-hqlk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fb81c0271c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:57.569321 containerd[1582]: 2026-01-28 00:02:57.515 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.197/32] ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.569321 containerd[1582]: 2026-01-28 00:02:57.515 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fb81c0271c ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.569321 containerd[1582]: 2026-01-28 00:02:57.523 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.569454 containerd[1582]: 2026-01-28 00:02:57.528 [INFO][4327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"075de7ad-4943-4b28-b59b-6c8de5997c15", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c", Pod:"coredns-668d6bf9bc-hqlk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.117.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9fb81c0271c", MAC:"ca:f0:43:c5:9f:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:57.569454 containerd[1582]: 2026-01-28 00:02:57.559 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqlk4" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-coredns--668d6bf9bc--hqlk4-eth0" Jan 28 00:02:57.580000 audit[4428]: NETFILTER_CFG table=filter:133 family=2 entries=17 op=nft_register_rule pid=4428 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:57.580000 audit[4428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdaa798e0 a2=0 a3=1 items=0 ppid=2849 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.580000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:57.584000 audit[4428]: NETFILTER_CFG table=nat:134 family=2 entries=35 op=nft_register_chain pid=4428 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:57.584000 audit[4428]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdaa798e0 a2=0 a3=1 items=0 ppid=2849 pid=4428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.584000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:57.602000 audit[4434]: NETFILTER_CFG table=filter:135 family=2 entries=54 op=nft_register_chain pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:57.602000 audit[4434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29416 a0=3 a1=ffffd7d960c0 a2=0 a3=ffff97bdffa8 items=0 ppid=3853 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.602000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:57.613000 audit: BPF prog-id=226 op=LOAD Jan 28 00:02:57.614000 audit: BPF prog-id=227 op=LOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.614000 audit: BPF prog-id=227 op=UNLOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.614000 audit: BPF prog-id=228 op=LOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.614000 audit: BPF prog-id=229 op=LOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 
a1=4000130168 a2=98 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.614000 audit: BPF prog-id=229 op=UNLOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.614000 audit: BPF prog-id=228 op=UNLOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.614000 audit: BPF prog-id=230 op=LOAD Jan 28 00:02:57.614000 audit[4408]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4394 pid=4408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.614000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064333962353233376237643631363762383432666566313830633161 Jan 28 00:02:57.619455 containerd[1582]: time="2026-01-28T00:02:57.619289826Z" level=info msg="connecting to shim fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c" address="unix:///run/containerd/s/a5d12e69150fbca5591a46b802199b82cc6d4dd83eaca4b9f1fafefe3615bb0e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:57.641738 systemd-networkd[1464]: caliba7f4c1eb4e: Link UP Jan 28 00:02:57.646156 systemd-networkd[1464]: caliba7f4c1eb4e: Gained carrier Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.276 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0 calico-apiserver-697c7bd8db- calico-apiserver 07cd90a0-de7e-4c03-9b09-e4adb1ab3e71 863 0 2026-01-28 00:02:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:697c7bd8db 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 calico-apiserver-697c7bd8db-7vcbz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba7f4c1eb4e [] [] }} ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.277 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.365 [INFO][4364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" HandleID="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.365 [INFO][4364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" HandleID="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"calico-apiserver-697c7bd8db-7vcbz", "timestamp":"2026-01-28 00:02:57.36519402 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.365 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.503 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
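The audit PROCTITLE records interleaved above store the command line as hex, with NUL bytes separating the arguments. A small decoding sketch (Python standard library; the hex string is copied verbatim from one of the iptables-restore records above):

```python
# Decode an audit PROCTITLE field: the process argv, hex-encoded, NUL-separated.
proctitle = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
             "002D2D6E6F666C757368002D2D636F756E74657273")

args = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(a.decode() for a in args))
# -> iptables-restore -w 5 -W 100000 --noflush --counters
```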
Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.503 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.536 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.565 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.578 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.585 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.589 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.590 [INFO][4364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.597 [INFO][4364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.607 [INFO][4364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.623 [INFO][4364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.198/26] block=192.168.117.192/26 handle="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.624 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.198/26] handle="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:57.679662 containerd[1582]: 2026-01-28 00:02:57.624 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
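In the coredns WorkloadEndpoint dumps earlier in this sequence the port numbers are printed in hex (Port:0x35, Port:0x23c1). Purely as a reading aid, the conversions (values copied from the log):

```python
# The Go struct dump prints ports in hex; these are the usual coredns ports.
print(0x35)    # 53   -> dns / dns-tcp
print(0x23c1)  # 9153 -> metrics
```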
Jan 28 00:02:57.680212 containerd[1582]: 2026-01-28 00:02:57.624 [INFO][4364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.198/26] IPv6=[] ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" HandleID="k8s-pod-network.af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.680212 containerd[1582]: 2026-01-28 00:02:57.629 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0", GenerateName:"calico-apiserver-697c7bd8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"07cd90a0-de7e-4c03-9b09-e4adb1ab3e71", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"697c7bd8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"calico-apiserver-697c7bd8db-7vcbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba7f4c1eb4e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:57.680212 containerd[1582]: 2026-01-28 00:02:57.630 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.198/32] ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.680212 containerd[1582]: 2026-01-28 00:02:57.630 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba7f4c1eb4e ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.680212 containerd[1582]: 2026-01-28 00:02:57.643 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.680353 containerd[1582]: 2026-01-28 
00:02:57.644 [INFO][4321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0", GenerateName:"calico-apiserver-697c7bd8db-", Namespace:"calico-apiserver", SelfLink:"", UID:"07cd90a0-de7e-4c03-9b09-e4adb1ab3e71", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"697c7bd8db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e", Pod:"calico-apiserver-697c7bd8db-7vcbz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.117.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba7f4c1eb4e", MAC:"ea:6b:f0:9b:0e:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:57.680353 containerd[1582]: 2026-01-28 00:02:57.675 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" Namespace="calico-apiserver" Pod="calico-apiserver-697c7bd8db-7vcbz" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--apiserver--697c7bd8db--7vcbz-eth0" Jan 28 00:02:57.694334 systemd[1]: Started cri-containerd-fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c.scope - libcontainer container fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c. 
Jan 28 00:02:57.693000 audit[4475]: NETFILTER_CFG table=filter:136 family=2 entries=36 op=nft_register_chain pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:57.693000 audit[4475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19140 a0=3 a1=fffff9948e20 a2=0 a3=ffffbc20efa8 items=0 ppid=3853 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.693000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:57.729216 containerd[1582]: time="2026-01-28T00:02:57.728617175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-5lsmb,Uid:137246fc-f131-4552-a311-34e5752765be,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0d39b5237b7d6167b842fef180c1ac56b571daa96f5b566285942c4b8228f8a9\"" Jan 28 00:02:57.731482 containerd[1582]: time="2026-01-28T00:02:57.731435667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:02:57.742000 audit: BPF prog-id=231 op=LOAD Jan 28 00:02:57.744000 audit: BPF prog-id=232 op=LOAD Jan 28 00:02:57.744000 audit[4458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.744000 audit: BPF prog-id=232 op=UNLOAD Jan 28 00:02:57.744000 audit[4458]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.744000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.745000 audit: BPF prog-id=233 op=LOAD Jan 28 00:02:57.745000 audit[4458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.745000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.746000 audit: BPF prog-id=234 op=LOAD Jan 28 00:02:57.746000 audit[4458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.746000 audit: BPF prog-id=234 op=UNLOAD Jan 28 00:02:57.746000 audit[4458]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.747000 audit: BPF prog-id=233 op=UNLOAD Jan 28 00:02:57.747000 audit[4458]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.748000 audit: BPF prog-id=235 op=LOAD Jan 28 00:02:57.748000 audit[4458]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4445 pid=4458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662666632373265303564343961643033343362313733663765353364 Jan 28 00:02:57.756308 containerd[1582]: time="2026-01-28T00:02:57.756157712Z" level=info msg="connecting to shim af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e" address="unix:///run/containerd/s/05b2620925885027b890705fc7f49cd56ed363babf9174016009581e88552e9e" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:57.814339 systemd[1]: Started cri-containerd-af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e.scope - libcontainer container af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e. 
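The audit SYSCALL records above and below log raw syscall numbers against arch=c00000b7. As a reading aid, a small lookup sketch; the mapping assumes the arm64 (generic) syscall table and the kernel's AUDIT_ARCH encoding, consistent with the aarch64 build this log comes from:

```python
# Map the raw fields in the audit SYSCALL records to readable names.
# Assumption: arm64 (generic) syscall numbers from include/uapi/asm-generic/unistd.h.
AUDIT_ARCH = {0xC00000B7: "AUDIT_ARCH_AARCH64"}   # EM_AARCH64 | 64BIT | LE
SYSCALLS_ARM64 = {57: "close", 211: "sendmsg", 280: "bpf"}

for arch, nr in [(0xC00000B7, 280), (0xC00000B7, 57), (0xC00000B7, 211)]:
    print(AUDIT_ARCH.get(arch, hex(arch)), SYSCALLS_ARM64.get(nr, nr))
```

Read this way, the BPF prog-id LOAD/UNLOAD pairs from comm="runc" are bpf(2) calls made while the container is being created (likely the cgroup v2 device-controller programs runc attaches), and the syscall=211 records from iptables-restore are netlink sendmsg calls carrying the nftables rule updates.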
Jan 28 00:02:57.821003 containerd[1582]: time="2026-01-28T00:02:57.820963884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqlk4,Uid:075de7ad-4943-4b28-b59b-6c8de5997c15,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c\"" Jan 28 00:02:57.825762 containerd[1582]: time="2026-01-28T00:02:57.825722588Z" level=info msg="CreateContainer within sandbox \"fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 00:02:57.840386 containerd[1582]: time="2026-01-28T00:02:57.840341636Z" level=info msg="Container 89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:02:57.853178 containerd[1582]: time="2026-01-28T00:02:57.853132239Z" level=info msg="CreateContainer within sandbox \"fbff272e05d49ad0343b173f7e53da6465ef87a73feebde50c35d75961b69f8c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920\"" Jan 28 00:02:57.860155 containerd[1582]: time="2026-01-28T00:02:57.857965986Z" level=info msg="StartContainer for \"89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920\"" Jan 28 00:02:57.860808 containerd[1582]: time="2026-01-28T00:02:57.860769798Z" level=info msg="connecting to shim 89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920" address="unix:///run/containerd/s/a5d12e69150fbca5591a46b802199b82cc6d4dd83eaca4b9f1fafefe3615bb0e" protocol=ttrpc version=3 Jan 28 00:02:57.881000 audit: BPF prog-id=236 op=LOAD Jan 28 00:02:57.882000 audit: BPF prog-id=237 op=LOAD Jan 28 00:02:57.882000 audit[4513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.883000 audit: BPF prog-id=237 op=UNLOAD Jan 28 00:02:57.883000 audit[4513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.883000 audit: BPF prog-id=238 op=LOAD Jan 28 00:02:57.883000 audit[4513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.883000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.884000 audit: BPF prog-id=239 op=LOAD Jan 28 00:02:57.884000 audit[4513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.884000 audit: BPF prog-id=239 op=UNLOAD Jan 28 00:02:57.884000 audit[4513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.884000 audit: BPF prog-id=238 op=UNLOAD Jan 28 00:02:57.884000 audit[4513]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.884000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.885000 audit: BPF prog-id=240 op=LOAD Jan 28 00:02:57.885000 audit[4513]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4502 pid=4513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.885000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166323764356161666531363239313631326634633037313735643135 Jan 28 00:02:57.888000 audit[4546]: NETFILTER_CFG table=filter:137 family=2 entries=45 op=nft_register_chain pid=4546 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:57.888000 audit[4546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24248 a0=3 a1=fffff45e9d70 a2=0 a3=ffffa8930fa8 items=0 ppid=3853 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.888000 audit: 
PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:57.902985 systemd[1]: Started cri-containerd-89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920.scope - libcontainer container 89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920. Jan 28 00:02:57.927000 audit: BPF prog-id=241 op=LOAD Jan 28 00:02:57.928000 audit: BPF prog-id=242 op=LOAD Jan 28 00:02:57.928000 audit[4540]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.929000 audit: BPF prog-id=242 op=UNLOAD Jan 28 00:02:57.929000 audit[4540]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.929000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.930000 audit: BPF prog-id=243 op=LOAD Jan 28 00:02:57.930000 audit[4540]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.930000 audit: BPF prog-id=244 op=LOAD Jan 28 00:02:57.930000 audit[4540]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.931000 audit: BPF prog-id=244 op=UNLOAD Jan 28 00:02:57.931000 audit[4540]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 
00:02:57.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.931000 audit: BPF prog-id=243 op=UNLOAD Jan 28 00:02:57.931000 audit[4540]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.931000 audit: BPF prog-id=245 op=LOAD Jan 28 00:02:57.931000 audit[4540]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4445 pid=4540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:57.931000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3839663333663563383634653733343632363664656336666632663239 Jan 28 00:02:57.978795 systemd-networkd[1464]: calide5cfc82709: Gained IPv6LL Jan 28 00:02:57.981360 containerd[1582]: time="2026-01-28T00:02:57.981221671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-697c7bd8db-7vcbz,Uid:07cd90a0-de7e-4c03-9b09-e4adb1ab3e71,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"af27d5aafe16291612f4c07175d1506a23284a7416517deac0d85f5d4bfcff6e\"" Jan 28 00:02:58.014903 containerd[1582]: time="2026-01-28T00:02:58.014850729Z" level=info msg="StartContainer for \"89f33f5c864e7346266dec6ff2f2922ece2eba8a906cc4c9cae5b80e52686920\" returns successfully" Jan 28 00:02:58.080862 containerd[1582]: time="2026-01-28T00:02:58.080806329Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:58.082854 containerd[1582]: time="2026-01-28T00:02:58.082782701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:58.083010 containerd[1582]: time="2026-01-28T00:02:58.082856224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:02:58.083449 kubelet[2747]: E0128 00:02:58.083394 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:02:58.083919 kubelet[2747]: E0128 00:02:58.083645 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:02:58.084656 kubelet[2747]: E0128 00:02:58.084482 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r599k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-5lsmb_calico-apiserver(137246fc-f131-4552-a311-34e5752765be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:58.085784 containerd[1582]: time="2026-01-28T00:02:58.084587945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:02:58.086819 kubelet[2747]: E0128 00:02:58.086758 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:02:58.112337 containerd[1582]: time="2026-01-28T00:02:58.112030667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h6h5c,Uid:bec7677b-1e96-4176-929f-6c7596f75411,Namespace:calico-system,Attempt:0,}" Jan 28 
00:02:58.112337 containerd[1582]: time="2026-01-28T00:02:58.112307360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74bd894b84-mmqcv,Uid:1d007c43-f99e-43db-8dd1-f5d56f04a788,Namespace:calico-system,Attempt:0,}" Jan 28 00:02:58.314010 systemd-networkd[1464]: calif2c32e2c532: Link UP Jan 28 00:02:58.315316 systemd-networkd[1464]: calif2c32e2c532: Gained carrier Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.202 [INFO][4580] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0 goldmane-666569f655- calico-system bec7677b-1e96-4176-929f-6c7596f75411 859 0 2026-01-28 00:02:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 goldmane-666569f655-h6h5c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif2c32e2c532 [] [] }} ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.202 [INFO][4580] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4605] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" HandleID="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Workload="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4605] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" HandleID="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Workload="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"goldmane-666569f655-h6h5c", "timestamp":"2026-01-28 00:02:58.242062619 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4605] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4605] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.254 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.262 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.269 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.272 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.276 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.279 [INFO][4605] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.282 [INFO][4605] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408 Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.288 [INFO][4605] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.298 [INFO][4605] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.199/26] block=192.168.117.192/26 handle="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.299 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.199/26] handle="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.299 [INFO][4605] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:02:58.341143 containerd[1582]: 2026-01-28 00:02:58.299 [INFO][4605] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.199/26] IPv6=[] ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" HandleID="k8s-pod-network.6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Workload="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.342809 containerd[1582]: 2026-01-28 00:02:58.304 [INFO][4580] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bec7677b-1e96-4176-929f-6c7596f75411", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"goldmane-666569f655-h6h5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2c32e2c532", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:58.342809 containerd[1582]: 2026-01-28 00:02:58.304 [INFO][4580] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.199/32] ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.342809 containerd[1582]: 2026-01-28 00:02:58.304 [INFO][4580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2c32e2c532 ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.342809 containerd[1582]: 2026-01-28 00:02:58.316 [INFO][4580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.342809 containerd[1582]: 2026-01-28 00:02:58.316 [INFO][4580] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" 
Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"bec7677b-1e96-4176-929f-6c7596f75411", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408", Pod:"goldmane-666569f655-h6h5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.117.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif2c32e2c532", MAC:"52:50:49:8b:e3:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:58.342809 containerd[1582]: 2026-01-28 00:02:58.334 [INFO][4580] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" Namespace="calico-system" Pod="goldmane-666569f655-h6h5c" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-goldmane--666569f655--h6h5c-eth0" Jan 28 00:02:58.379037 containerd[1582]: time="2026-01-28T00:02:58.378987693Z" level=info msg="connecting to shim 6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408" address="unix:///run/containerd/s/bcdddf28c2858df27b68c3928a92ba973196401a044609fdea54c9ae0f9d5e69" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:58.383000 audit[4638]: NETFILTER_CFG table=filter:138 family=2 entries=56 op=nft_register_chain pid=4638 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:58.383000 audit[4638]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28728 a0=3 a1=ffffcc827660 a2=0 a3=ffffb7046fa8 items=0 ppid=3853 pid=4638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.383000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:58.413100 kubelet[2747]: E0128 00:02:58.412574 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:02:58.431694 containerd[1582]: time="2026-01-28T00:02:58.431461063Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:58.437026 containerd[1582]: time="2026-01-28T00:02:58.434079905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:02:58.437026 containerd[1582]: time="2026-01-28T00:02:58.434194391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:58.437209 kubelet[2747]: E0128 00:02:58.435248 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:02:58.437209 kubelet[2747]: E0128 00:02:58.435303 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:02:58.437209 kubelet[2747]: E0128 00:02:58.435467 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l67bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-7vcbz_calico-apiserver(07cd90a0-de7e-4c03-9b09-e4adb1ab3e71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:58.437209 kubelet[2747]: E0128 00:02:58.436719 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:02:58.444283 systemd[1]: Started cri-containerd-6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408.scope - libcontainer container 6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408. 
Jan 28 00:02:58.453282 kubelet[2747]: I0128 00:02:58.452729 2747 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hqlk4" podStartSLOduration=52.452707615 podStartE2EDuration="52.452707615s" podCreationTimestamp="2026-01-28 00:02:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:02:58.452182711 +0000 UTC m=+58.479065630" watchObservedRunningTime="2026-01-28 00:02:58.452707615 +0000 UTC m=+58.479590534" Jan 28 00:02:58.473894 systemd-networkd[1464]: calie4afec9ff63: Link UP Jan 28 00:02:58.479468 systemd-networkd[1464]: calie4afec9ff63: Gained carrier Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.202 [INFO][4586] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0 calico-kube-controllers-74bd894b84- calico-system 1d007c43-f99e-43db-8dd1-f5d56f04a788 857 0 2026-01-28 00:02:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74bd894b84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4593-0-0-n-20383d5ef7 calico-kube-controllers-74bd894b84-mmqcv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie4afec9ff63 [] [] }} ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.202 [INFO][4586] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4603] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" HandleID="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.242 [INFO][4603] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" HandleID="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4593-0-0-n-20383d5ef7", "pod":"calico-kube-controllers-74bd894b84-mmqcv", "timestamp":"2026-01-28 00:02:58.242182704 +0000 UTC"}, Hostname:"ci-4593-0-0-n-20383d5ef7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 00:02:58.502189 containerd[1582]: 
2026-01-28 00:02:58.242 [INFO][4603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.299 [INFO][4603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.300 [INFO][4603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4593-0-0-n-20383d5ef7' Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.355 [INFO][4603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.371 [INFO][4603] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.382 [INFO][4603] ipam/ipam.go 511: Trying affinity for 192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.393 [INFO][4603] ipam/ipam.go 158: Attempting to load block cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.400 [INFO][4603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.117.192/26 host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.400 [INFO][4603] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.117.192/26 handle="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.405 [INFO][4603] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.419 [INFO][4603] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.117.192/26 handle="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.447 [INFO][4603] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.117.200/26] block=192.168.117.192/26 handle="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.447 [INFO][4603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.117.200/26] handle="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" host="ci-4593-0-0-n-20383d5ef7" Jan 28 00:02:58.502189 containerd[1582]: 2026-01-28 00:02:58.447 [INFO][4603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 00:02:58.504284 containerd[1582]: 2026-01-28 00:02:58.448 [INFO][4603] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.117.200/26] IPv6=[] ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" HandleID="k8s-pod-network.3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Workload="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.504284 containerd[1582]: 2026-01-28 00:02:58.454 [INFO][4586] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0", GenerateName:"calico-kube-controllers-74bd894b84-", Namespace:"calico-system", SelfLink:"", UID:"1d007c43-f99e-43db-8dd1-f5d56f04a788", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74bd894b84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"", Pod:"calico-kube-controllers-74bd894b84-mmqcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4afec9ff63", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:58.504284 containerd[1582]: 2026-01-28 00:02:58.454 [INFO][4586] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.117.200/32] ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.504284 containerd[1582]: 2026-01-28 00:02:58.454 [INFO][4586] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie4afec9ff63 ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.504284 containerd[1582]: 2026-01-28 00:02:58.481 [INFO][4586] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" 
WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.504449 containerd[1582]: 2026-01-28 00:02:58.482 [INFO][4586] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0", GenerateName:"calico-kube-controllers-74bd894b84-", Namespace:"calico-system", SelfLink:"", UID:"1d007c43-f99e-43db-8dd1-f5d56f04a788", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 2, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74bd894b84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4593-0-0-n-20383d5ef7", ContainerID:"3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f", Pod:"calico-kube-controllers-74bd894b84-mmqcv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.117.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie4afec9ff63", MAC:"3e:61:c8:b4:86:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 00:02:58.504449 containerd[1582]: 2026-01-28 00:02:58.496 [INFO][4586] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" Namespace="calico-system" Pod="calico-kube-controllers-74bd894b84-mmqcv" WorkloadEndpoint="ci--4593--0--0--n--20383d5ef7-k8s-calico--kube--controllers--74bd894b84--mmqcv-eth0" Jan 28 00:02:58.541950 containerd[1582]: time="2026-01-28T00:02:58.541904340Z" level=info msg="connecting to shim 3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f" address="unix:///run/containerd/s/37caee32513c091fa636af41450a16fcedcb34e7d6af3575603b74693bb87ff8" namespace=k8s.io protocol=ttrpc version=3 Jan 28 00:02:58.565000 audit[4703]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:58.565000 audit[4703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffec951a90 a2=0 a3=1 items=0 ppid=2849 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.565000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
28 00:02:58.575000 audit[4703]: NETFILTER_CFG table=nat:140 family=2 entries=44 op=nft_register_rule pid=4703 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:58.575000 audit[4703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffec951a90 a2=0 a3=1 items=0 ppid=2849 pid=4703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.575000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:58.586980 systemd[1]: Started cri-containerd-3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f.scope - libcontainer container 3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f. Jan 28 00:02:58.587000 audit[4711]: NETFILTER_CFG table=filter:141 family=2 entries=52 op=nft_register_chain pid=4711 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 28 00:02:58.587000 audit[4711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24312 a0=3 a1=ffffdccca2a0 a2=0 a3=ffffbea78fa8 items=0 ppid=3853 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.587000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 28 00:02:58.591000 audit: BPF prog-id=246 op=LOAD Jan 28 00:02:58.592000 audit: BPF prog-id=247 op=LOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.592000 audit: BPF prog-id=247 op=UNLOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.592000 audit: BPF prog-id=248 op=LOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.592000 audit: BPF prog-id=249 op=LOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.592000 audit: BPF prog-id=249 op=UNLOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.592000 audit: BPF prog-id=248 op=UNLOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.592000 audit: BPF prog-id=250 op=LOAD Jan 28 00:02:58.592000 audit[4649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4637 pid=4649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.592000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633333331323638393037323863613361306137646334333766343839 Jan 28 00:02:58.608000 audit: BPF prog-id=251 op=LOAD Jan 28 00:02:58.608000 audit: BPF prog-id=252 op=LOAD Jan 28 00:02:58.608000 audit[4698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.608000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.608000 audit: BPF prog-id=252 op=UNLOAD Jan 28 00:02:58.608000 audit[4698]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.608000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.609000 audit: BPF prog-id=253 op=LOAD Jan 28 00:02:58.609000 audit[4698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.609000 audit: BPF prog-id=254 op=LOAD Jan 28 00:02:58.609000 audit[4698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.609000 audit: BPF prog-id=254 op=UNLOAD Jan 28 00:02:58.609000 audit[4698]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.609000 audit: BPF prog-id=253 op=UNLOAD Jan 28 00:02:58.609000 audit[4698]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.609000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.609000 audit: BPF prog-id=255 op=LOAD Jan 28 00:02:58.609000 audit[4698]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4684 pid=4698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:58.609000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366363736376131363565623461613033306662633962323639646561 Jan 28 00:02:58.644969 containerd[1582]: time="2026-01-28T00:02:58.644908070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-h6h5c,Uid:bec7677b-1e96-4176-929f-6c7596f75411,Namespace:calico-system,Attempt:0,} returns sandbox id \"6333126890728ca3a0a7dc437f489abd06304ba61e533940214e6cc5011d4408\"" Jan 28 00:02:58.648443 containerd[1582]: time="2026-01-28T00:02:58.648403834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:02:58.661901 containerd[1582]: time="2026-01-28T00:02:58.661735296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74bd894b84-mmqcv,Uid:1d007c43-f99e-43db-8dd1-f5d56f04a788,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f6767a165eb4aa030fbc9b269deaef89914e1d620b955a03cae7b2cffbdc23f\"" Jan 28 00:02:58.939267 systemd-networkd[1464]: caliba7f4c1eb4e: Gained IPv6LL Jan 28 00:02:58.999438 containerd[1582]: time="2026-01-28T00:02:58.999362222Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:59.001630 containerd[1582]: time="2026-01-28T00:02:59.001465880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:02:59.001842 containerd[1582]: time="2026-01-28T00:02:59.001492441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:59.002333 kubelet[2747]: E0128 00:02:59.002203 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:02:59.002333 kubelet[2747]: E0128 00:02:59.002264 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:02:59.002585 kubelet[2747]: E0128 00:02:59.002500 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52vxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h6h5c_calico-system(bec7677b-1e96-4176-929f-6c7596f75411): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:59.002942 systemd-networkd[1464]: cali9fb81c0271c: Gained IPv6LL Jan 28 00:02:59.003373 containerd[1582]: time="2026-01-28T00:02:59.003289925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:02:59.004421 kubelet[2747]: E0128 00:02:59.004176 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:02:59.132215 systemd-networkd[1464]: calid618a0e7388: Gained IPv6LL Jan 28 00:02:59.342264 containerd[1582]: time="2026-01-28T00:02:59.342082939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:02:59.345048 containerd[1582]: time="2026-01-28T00:02:59.344846427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:02:59.345048 containerd[1582]: time="2026-01-28T00:02:59.344975793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:02:59.345740 kubelet[2747]: E0128 00:02:59.345552 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:02:59.345740 kubelet[2747]: E0128 00:02:59.345690 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:02:59.346337 kubelet[2747]: E0128 00:02:59.346214 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxrkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74bd894b84-mmqcv_calico-system(1d007c43-f99e-43db-8dd1-f5d56f04a788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:02:59.347878 kubelet[2747]: E0128 00:02:59.347504 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:02:59.417294 kubelet[2747]: E0128 00:02:59.416741 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:02:59.422301 kubelet[2747]: E0128 00:02:59.421113 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:02:59.422494 kubelet[2747]: E0128 00:02:59.422335 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:02:59.422801 kubelet[2747]: E0128 00:02:59.422763 2747 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:02:59.501924 kernel: kauditd_printk_skb: 227 callbacks suppressed Jan 28 00:02:59.502066 kernel: audit: type=1325 audit(1769558579.498:738): table=filter:142 family=2 entries=14 op=nft_register_rule pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:59.498000 audit[4732]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:59.498000 audit[4732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5fd8eb0 a2=0 a3=1 items=0 ppid=2849 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:59.507626 kernel: audit: type=1300 audit(1769558579.498:738): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5fd8eb0 a2=0 a3=1 items=0 ppid=2849 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:59.498000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:59.509809 kernel: audit: type=1327 audit(1769558579.498:738): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:59.521000 audit[4732]: NETFILTER_CFG table=nat:143 family=2 entries=56 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:59.521000 audit[4732]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe5fd8eb0 a2=0 a3=1 items=0 ppid=2849 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:59.526358 kernel: audit: type=1325 audit(1769558579.521:739): table=nat:143 family=2 entries=56 op=nft_register_chain pid=4732 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:02:59.526463 kernel: audit: type=1300 audit(1769558579.521:739): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe5fd8eb0 a2=0 a3=1 items=0 ppid=2849 pid=4732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:02:59.526487 kernel: audit: type=1327 audit(1769558579.521:739): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:59.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:02:59.963495 systemd-networkd[1464]: calie4afec9ff63: Gained IPv6LL Jan 
28 00:03:00.154805 systemd-networkd[1464]: calif2c32e2c532: Gained IPv6LL Jan 28 00:03:00.435096 kubelet[2747]: E0128 00:03:00.433636 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:03:00.439493 kubelet[2747]: E0128 00:03:00.439336 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:03:00.550000 audit[4739]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:03:00.550000 audit[4739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff2243660 a2=0 a3=1 items=0 ppid=2849 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:03:00.558082 kernel: audit: type=1325 audit(1769558580.550:740): table=filter:144 family=2 entries=14 op=nft_register_rule pid=4739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:03:00.558356 kernel: audit: type=1300 audit(1769558580.550:740): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff2243660 a2=0 a3=1 items=0 ppid=2849 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:03:00.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:03:00.560923 kernel: audit: type=1327 audit(1769558580.550:740): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:03:00.561230 kernel: audit: type=1325 audit(1769558580.558:741): table=nat:145 family=2 entries=20 op=nft_register_rule pid=4739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:03:00.558000 audit[4739]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:03:00.558000 audit[4739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff2243660 a2=0 a3=1 items=0 ppid=2849 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:03:00.558000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:03:04.108614 containerd[1582]: time="2026-01-28T00:03:04.108506786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:03:04.448424 containerd[1582]: time="2026-01-28T00:03:04.448371604Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:04.450066 containerd[1582]: time="2026-01-28T00:03:04.449992204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:03:04.450537 containerd[1582]: time="2026-01-28T00:03:04.450487420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:04.451028 kubelet[2747]: E0128 00:03:04.450903 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:03:04.452057 kubelet[2747]: E0128 00:03:04.451638 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:03:04.452057 kubelet[2747]: E0128 00:03:04.451788 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5bb067f95b524ace93b37ce99b4fd669,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 28 00:03:04.454980 containerd[1582]: time="2026-01-28T00:03:04.454940402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:03:04.803141 containerd[1582]: time="2026-01-28T00:03:04.802662394Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:04.805015 containerd[1582]: time="2026-01-28T00:03:04.804920563Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:03:04.805197 containerd[1582]: time="2026-01-28T00:03:04.805062556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:04.805337 kubelet[2747]: E0128 00:03:04.805272 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:03:04.805457 kubelet[2747]: E0128 00:03:04.805341 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:03:04.805501 kubelet[2747]: E0128 00:03:04.805467 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePoli
cy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:04.807064 kubelet[2747]: E0128 00:03:04.806950 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:03:09.108576 containerd[1582]: time="2026-01-28T00:03:09.108403300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:03:09.468256 containerd[1582]: time="2026-01-28T00:03:09.467803187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:09.469968 containerd[1582]: time="2026-01-28T00:03:09.469872669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:03:09.470158 containerd[1582]: time="2026-01-28T00:03:09.470009184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:09.470458 kubelet[2747]: E0128 00:03:09.470347 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:03:09.470458 kubelet[2747]: E0128 00:03:09.470408 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:03:09.471659 kubelet[2747]: E0128 00:03:09.470558 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:09.474067 containerd[1582]: time="2026-01-28T00:03:09.473761803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:03:09.824850 containerd[1582]: time="2026-01-28T00:03:09.824676608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:09.826375 containerd[1582]: time="2026-01-28T00:03:09.826280228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:09.826679 containerd[1582]: time="2026-01-28T00:03:09.826279908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:03:09.826998 kubelet[2747]: E0128 00:03:09.826877 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:03:09.827114 kubelet[2747]: E0128 00:03:09.827026 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:03:09.833007 kubelet[2747]: E0128 00:03:09.832904 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:09.834367 kubelet[2747]: E0128 00:03:09.834304 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:03:11.110657 containerd[1582]: time="2026-01-28T00:03:11.110323439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:03:11.460966 containerd[1582]: time="2026-01-28T00:03:11.458568221Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
00:03:11.464614 containerd[1582]: time="2026-01-28T00:03:11.464360108Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:03:11.464614 containerd[1582]: time="2026-01-28T00:03:11.464478664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:11.465374 kubelet[2747]: E0128 00:03:11.465321 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:11.465892 kubelet[2747]: E0128 00:03:11.465843 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:11.466275 kubelet[2747]: E0128 00:03:11.466230 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r599k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-5lsmb_calico-apiserver(137246fc-f131-4552-a311-34e5752765be): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:11.468577 kubelet[2747]: E0128 00:03:11.468404 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:03:12.116219 containerd[1582]: time="2026-01-28T00:03:12.115486212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:03:12.467630 containerd[1582]: time="2026-01-28T00:03:12.467335606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:12.469780 containerd[1582]: time="2026-01-28T00:03:12.469589095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:03:12.470136 containerd[1582]: time="2026-01-28T00:03:12.469740930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:12.470322 kubelet[2747]: E0128 00:03:12.470257 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:12.470322 kubelet[2747]: E0128 00:03:12.470318 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:12.472235 kubelet[2747]: E0128 00:03:12.470542 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l67bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-7vcbz_calico-apiserver(07cd90a0-de7e-4c03-9b09-e4adb1ab3e71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:12.473124 kubelet[2747]: E0128 00:03:12.472090 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:03:12.473650 containerd[1582]: time="2026-01-28T00:03:12.473617328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:03:12.803843 containerd[1582]: time="2026-01-28T00:03:12.803661207Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:12.806776 containerd[1582]: time="2026-01-28T00:03:12.806588635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:03:12.806776 containerd[1582]: time="2026-01-28T00:03:12.806713271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:12.807867 kubelet[2747]: E0128 00:03:12.807148 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:03:12.807867 kubelet[2747]: E0128 00:03:12.807206 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:03:12.807867 kubelet[2747]: E0128 00:03:12.807336 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxrkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74bd894b84-mmqcv_calico-system(1d007c43-f99e-43db-8dd1-f5d56f04a788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:12.809755 kubelet[2747]: E0128 00:03:12.809181 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:03:13.107893 containerd[1582]: time="2026-01-28T00:03:13.107394674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:03:13.458634 containerd[1582]: 
time="2026-01-28T00:03:13.458393724Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:13.460040 containerd[1582]: time="2026-01-28T00:03:13.459835441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:03:13.460040 containerd[1582]: time="2026-01-28T00:03:13.459901119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:13.460498 kubelet[2747]: E0128 00:03:13.460221 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:03:13.460498 kubelet[2747]: E0128 00:03:13.460281 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:03:13.460903 kubelet[2747]: E0128 00:03:13.460467 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52vxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h6h5c_calico-system(bec7677b-1e96-4176-929f-6c7596f75411): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:13.461732 kubelet[2747]: E0128 00:03:13.461677 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:03:18.110412 kubelet[2747]: E0128 00:03:18.110330 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:03:21.110838 kubelet[2747]: E0128 00:03:21.110765 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:03:25.108375 
kubelet[2747]: E0128 00:03:25.108303 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:03:25.111403 kubelet[2747]: E0128 00:03:25.111321 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:03:26.110093 kubelet[2747]: E0128 00:03:26.110020 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:03:27.108658 kubelet[2747]: E0128 00:03:27.107410 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:03:32.111860 containerd[1582]: time="2026-01-28T00:03:32.110989736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:03:32.469862 containerd[1582]: time="2026-01-28T00:03:32.469781996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:32.471540 containerd[1582]: time="2026-01-28T00:03:32.471385712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:03:32.471540 containerd[1582]: time="2026-01-28T00:03:32.471425072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:32.472755 kubelet[2747]: E0128 00:03:32.472701 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:03:32.473330 
kubelet[2747]: E0128 00:03:32.473140 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:03:32.473477 kubelet[2747]: E0128 00:03:32.473369 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5bb067f95b524ace93b37ce99b4fd669,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:32.474036 containerd[1582]: time="2026-01-28T00:03:32.474000225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:03:32.817434 containerd[1582]: time="2026-01-28T00:03:32.817291407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:32.825246 containerd[1582]: time="2026-01-28T00:03:32.825149665Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:03:32.825409 containerd[1582]: time="2026-01-28T00:03:32.825306465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:32.826054 kubelet[2747]: E0128 00:03:32.825951 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:03:32.826054 kubelet[2747]: E0128 00:03:32.826045 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:03:32.826986 kubelet[2747]: E0128 00:03:32.826388 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:32.827170 containerd[1582]: time="2026-01-28T00:03:32.826692821Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:03:33.158897 containerd[1582]: time="2026-01-28T00:03:33.158032035Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:33.160469 containerd[1582]: time="2026-01-28T00:03:33.159870832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:03:33.160469 containerd[1582]: time="2026-01-28T00:03:33.160421471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:33.160759 kubelet[2747]: E0128 00:03:33.160692 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:03:33.160806 kubelet[2747]: E0128 00:03:33.160769 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:03:33.161697 kubelet[2747]: E0128 00:03:33.161087 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:33.162650 containerd[1582]: time="2026-01-28T00:03:33.162153268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:03:33.162775 kubelet[2747]: E0128 00:03:33.162312 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:03:33.613245 containerd[1582]: time="2026-01-28T00:03:33.613138171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:33.617617 containerd[1582]: time="2026-01-28T00:03:33.616544205Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:03:33.617617 containerd[1582]: time="2026-01-28T00:03:33.616546445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:33.618616 kubelet[2747]: E0128 00:03:33.618112 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:03:33.618616 kubelet[2747]: E0128 00:03:33.618262 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:03:33.618616 kubelet[2747]: E0128 00:03:33.618547 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:33.620015 kubelet[2747]: E0128 00:03:33.619828 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:03:38.113161 containerd[1582]: time="2026-01-28T00:03:38.112410298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:03:38.471029 containerd[1582]: time="2026-01-28T00:03:38.470962168Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:38.473170 containerd[1582]: time="2026-01-28T00:03:38.473048174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:03:38.473385 containerd[1582]: time="2026-01-28T00:03:38.473097494Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:38.473624 kubelet[2747]: E0128 00:03:38.473492 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:38.473624 kubelet[2747]: E0128 00:03:38.473573 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:38.474750 kubelet[2747]: E0128 00:03:38.474609 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r599k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-5lsmb_calico-apiserver(137246fc-f131-4552-a311-34e5752765be): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:38.476213 kubelet[2747]: E0128 00:03:38.476135 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:03:39.109979 containerd[1582]: time="2026-01-28T00:03:39.108836210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:03:39.465493 containerd[1582]: time="2026-01-28T00:03:39.464952371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:39.467266 containerd[1582]: time="2026-01-28T00:03:39.467067338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:03:39.467266 containerd[1582]: time="2026-01-28T00:03:39.467158579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:39.467551 kubelet[2747]: E0128 00:03:39.467418 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:03:39.467551 kubelet[2747]: E0128 00:03:39.467481 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:03:39.467734 kubelet[2747]: E0128 00:03:39.467653 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52vxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h6h5c_calico-system(bec7677b-1e96-4176-929f-6c7596f75411): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:39.469559 kubelet[2747]: E0128 00:03:39.469307 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:03:40.114715 containerd[1582]: time="2026-01-28T00:03:40.114633831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:03:40.469050 containerd[1582]: time="2026-01-28T00:03:40.468973192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:40.470692 containerd[1582]: time="2026-01-28T00:03:40.470629359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:03:40.470870 containerd[1582]: time="2026-01-28T00:03:40.470674319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:40.471047 kubelet[2747]: E0128 00:03:40.470985 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:40.471454 kubelet[2747]: E0128 00:03:40.471057 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:03:40.471454 kubelet[2747]: E0128 00:03:40.471241 2747 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l67bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-7vcbz_calico-apiserver(07cd90a0-de7e-4c03-9b09-e4adb1ab3e71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:40.472817 kubelet[2747]: E0128 00:03:40.472756 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:03:41.109765 containerd[1582]: time="2026-01-28T00:03:41.108310163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:03:41.460543 containerd[1582]: time="2026-01-28T00:03:41.460071867Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:03:41.462758 containerd[1582]: time="2026-01-28T00:03:41.462692001Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 
00:03:41.463134 containerd[1582]: time="2026-01-28T00:03:41.462742601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:03:41.463250 kubelet[2747]: E0128 00:03:41.463206 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:03:41.463364 kubelet[2747]: E0128 00:03:41.463277 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:03:41.464227 kubelet[2747]: E0128 00:03:41.463430 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxrkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-74bd894b84-mmqcv_calico-system(1d007c43-f99e-43db-8dd1-f5d56f04a788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:03:41.465426 kubelet[2747]: E0128 00:03:41.465358 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:03:48.112996 kubelet[2747]: E0128 00:03:48.112794 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:03:48.114916 kubelet[2747]: E0128 00:03:48.114834 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:03:51.107098 kubelet[2747]: E0128 00:03:51.107014 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:03:52.110142 kubelet[2747]: E0128 00:03:52.109655 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:03:54.110847 kubelet[2747]: E0128 00:03:54.110155 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:03:55.107917 kubelet[2747]: E0128 00:03:55.107845 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:03:55.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-159.69.123.112:22-100.24.34.255:32920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:03:55.311233 systemd[1]: Started sshd@7-159.69.123.112:22-100.24.34.255:32920.service - OpenSSH per-connection server daemon (100.24.34.255:32920). Jan 28 00:03:55.314839 kernel: kauditd_printk_skb: 2 callbacks suppressed Jan 28 00:03:55.314980 kernel: audit: type=1130 audit(1769558635.310:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-159.69.123.112:22-100.24.34.255:32920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:03:59.109894 kubelet[2747]: E0128 00:03:59.108994 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:03:59.110759 kubelet[2747]: E0128 00:03:59.108998 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:04:00.021817 sshd[4826]: Connection closed by 100.24.34.255 port 32920 [preauth] Jan 28 00:04:00.025206 systemd[1]: sshd@7-159.69.123.112:22-100.24.34.255:32920.service: Deactivated successfully. Jan 28 00:04:00.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-159.69.123.112:22-100.24.34.255:32920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:00.030625 kernel: audit: type=1131 audit(1769558640.025:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-159.69.123.112:22-100.24.34.255:32920 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:04:02.107772 kubelet[2747]: E0128 00:04:02.107390 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:04:04.112654 kubelet[2747]: E0128 00:04:04.111361 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:04:05.107944 kubelet[2747]: E0128 00:04:05.107625 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:04:08.110394 kubelet[2747]: E0128 00:04:08.110304 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:04:10.111144 kubelet[2747]: E0128 00:04:10.111016 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:04:12.109230 kubelet[2747]: E0128 00:04:12.109141 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:04:17.109514 kubelet[2747]: E0128 00:04:17.107734 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:04:17.110451 kubelet[2747]: E0128 00:04:17.110379 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:04:19.109309 kubelet[2747]: E0128 00:04:19.109223 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:04:20.108987 kubelet[2747]: E0128 00:04:20.108910 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:04:23.109723 containerd[1582]: time="2026-01-28T00:04:23.109547414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 00:04:23.475745 containerd[1582]: time="2026-01-28T00:04:23.475650101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:23.479408 containerd[1582]: time="2026-01-28T00:04:23.479338267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 00:04:23.479564 containerd[1582]: time="2026-01-28T00:04:23.479465109Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:23.480437 kubelet[2747]: E0128 00:04:23.480179 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:04:23.480437 kubelet[2747]: E0128 00:04:23.480243 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 00:04:23.480437 kubelet[2747]: E0128 00:04:23.480368 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5bb067f95b524ace93b37ce99b4fd669,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:23.482671 containerd[1582]: time="2026-01-28T00:04:23.482625623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 00:04:23.830715 containerd[1582]: time="2026-01-28T00:04:23.829492743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:23.833501 containerd[1582]: time="2026-01-28T00:04:23.833313792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 00:04:23.833501 containerd[1582]: time="2026-01-28T00:04:23.833445075Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:23.835614 kubelet[2747]: E0128 00:04:23.833907 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:04:23.835614 kubelet[2747]: E0128 00:04:23.833968 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 00:04:23.835614 kubelet[2747]: E0128 00:04:23.834098 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8965z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-558b9cfcb8-hltp7_calico-system(28c2487d-b7dc-46cd-9e63-ec0ab10a8703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:23.835614 kubelet[2747]: E0128 00:04:23.835309 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:04:24.109229 containerd[1582]: time="2026-01-28T00:04:24.107760575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 00:04:24.467338 containerd[1582]: time="2026-01-28T00:04:24.467227380Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:24.470684 containerd[1582]: time="2026-01-28T00:04:24.470526017Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 00:04:24.470684 containerd[1582]: time="2026-01-28T00:04:24.470618539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:24.471166 kubelet[2747]: E0128 00:04:24.471035 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:04:24.471166 kubelet[2747]: E0128 00:04:24.471104 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 00:04:24.471912 kubelet[2747]: E0128 00:04:24.471238 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:24.474537 containerd[1582]: time="2026-01-28T00:04:24.474407708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 00:04:24.822719 containerd[1582]: time="2026-01-28T00:04:24.822479926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:24.824790 containerd[1582]: time="2026-01-28T00:04:24.824560615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 00:04:24.824790 containerd[1582]: time="2026-01-28T00:04:24.824725419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:24.825052 kubelet[2747]: E0128 00:04:24.824964 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:04:24.825497 kubelet[2747]: E0128 00:04:24.825082 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 00:04:24.825497 kubelet[2747]: E0128 00:04:24.825247 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5ncft,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w6mgn_calico-system(2955546a-cb98-4307-9f9a-44877b3e7017): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:24.826748 kubelet[2747]: E0128 00:04:24.826695 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:04:28.108227 containerd[1582]: time="2026-01-28T00:04:28.107994082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 00:04:28.454399 containerd[1582]: time="2026-01-28T00:04:28.454341036Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 
00:04:28.455893 containerd[1582]: time="2026-01-28T00:04:28.455840072Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:04:28.455992 containerd[1582]: time="2026-01-28T00:04:28.455953955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:28.456644 kubelet[2747]: E0128 00:04:28.456268 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:04:28.456644 kubelet[2747]: E0128 00:04:28.456332 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:04:28.456644 kubelet[2747]: E0128 00:04:28.456510 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r599k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-697c7bd8db-5lsmb_calico-apiserver(137246fc-f131-4552-a311-34e5752765be): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:28.457960 kubelet[2747]: E0128 00:04:28.457810 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:04:30.112875 containerd[1582]: time="2026-01-28T00:04:30.112820968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 00:04:30.472221 containerd[1582]: time="2026-01-28T00:04:30.472136275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:30.475527 containerd[1582]: time="2026-01-28T00:04:30.475443276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 00:04:30.475689 containerd[1582]: time="2026-01-28T00:04:30.475496037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:30.476132 kubelet[2747]: E0128 00:04:30.476089 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:04:30.476628 kubelet[2747]: E0128 00:04:30.476150 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 00:04:30.476628 kubelet[2747]: E0128 00:04:30.476459 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-52vxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-h6h5c_calico-system(bec7677b-1e96-4176-929f-6c7596f75411): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:30.477902 kubelet[2747]: E0128 00:04:30.477854 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:04:34.114023 containerd[1582]: time="2026-01-28T00:04:34.113549781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
28 00:04:34.475393 containerd[1582]: time="2026-01-28T00:04:34.474790596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:34.476804 containerd[1582]: time="2026-01-28T00:04:34.476612402Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 00:04:34.476804 containerd[1582]: time="2026-01-28T00:04:34.476746885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:34.477290 kubelet[2747]: E0128 00:04:34.477200 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:04:34.477860 kubelet[2747]: E0128 00:04:34.477494 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 00:04:34.478333 kubelet[2747]: E0128 00:04:34.478184 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l67bs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-697c7bd8db-7vcbz_calico-apiserver(07cd90a0-de7e-4c03-9b09-e4adb1ab3e71): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:34.479751 kubelet[2747]: E0128 00:04:34.479693 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:04:35.109506 containerd[1582]: time="2026-01-28T00:04:35.108793664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 00:04:35.454029 containerd[1582]: time="2026-01-28T00:04:35.453969326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 28 00:04:35.457475 containerd[1582]: time="2026-01-28T00:04:35.457330211Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 28 00:04:35.457475 containerd[1582]: time="2026-01-28T00:04:35.457340411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 00:04:35.458775 kubelet[2747]: E0128 00:04:35.458735 2747 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:04:35.459079 kubelet[2747]: E0128 00:04:35.458876 2747 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 00:04:35.459079 kubelet[2747]: E0128 00:04:35.459021 2747 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxrkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74bd894b84-mmqcv_calico-system(1d007c43-f99e-43db-8dd1-f5d56f04a788): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 00:04:35.460366 kubelet[2747]: E0128 00:04:35.460317 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:04:36.082133 systemd[1]: Started sshd@8-159.69.123.112:22-20.161.92.111:58090.service - OpenSSH per-connection server daemon (20.161.92.111:58090). 
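The pull failures recorded above all follow the same pattern: containerd asks ghcr.io for a flatcar/calico image at tag v3.30.4, the registry answers 404 Not Found, and the kubelet surfaces this as ErrImagePull and later as ImagePullBackOff. A minimal sketch for reproducing the same pull outside the kubelet, assuming the containerd v1 Go client module (github.com/containerd/containerd) and the k8s.io namespace used for CRI-managed images; the image reference is taken verbatim from the log:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd socket the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4" // tag reported as "not found" above
	img, err := client.Pull(ctx, ref, containerd.WithPullUnpack)
	if err != nil {
		// Expected to fail with the same "failed to resolve image ... not found" error.
		log.Fatalf("pull %s: %v", ref, err)
	}
	fmt.Println("pulled", img.Name())
}

Run as root on the node, this shows whether the failure is specific to the kubelet's pull path or, as the 404 from ghcr.io above suggests, simply a tag that the registry does not serve.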
Jan 28 00:04:36.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-159.69.123.112:22-20.161.92.111:58090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:36.087623 kernel: audit: type=1130 audit(1769558676.081:744): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-159.69.123.112:22-20.161.92.111:58090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:36.631000 audit[4897]: USER_ACCT pid=4897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.634447 sshd[4897]: Accepted publickey for core from 20.161.92.111 port 58090 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:36.636206 sshd-session[4897]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:36.634000 audit[4897]: CRED_ACQ pid=4897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.639655 kernel: audit: type=1101 audit(1769558676.631:745): pid=4897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.639805 kernel: audit: type=1103 audit(1769558676.634:746): pid=4897 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.642182 kernel: audit: type=1006 audit(1769558676.634:747): pid=4897 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 28 00:04:36.642348 kernel: audit: type=1300 audit(1769558676.634:747): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe25ceca0 a2=3 a3=0 items=0 ppid=1 pid=4897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:36.634000 audit[4897]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe25ceca0 a2=3 a3=0 items=0 ppid=1 pid=4897 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:36.646618 kernel: audit: type=1327 audit(1769558676.634:747): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:36.634000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:36.653105 systemd-logind[1561]: New session 9 of user core. Jan 28 00:04:36.661709 systemd[1]: Started session-9.scope - Session 9 of User core. 
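The audit records above carry the process title as a hex-encoded PROCTITLE field (NUL bytes separate argv elements where present). A minimal sketch that decodes it; the value below is copied verbatim from the sshd-session records above and decodes to "sshd-session: core [priv]":

package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into a readable
// command line, replacing the NUL separators between argv elements with spaces.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(string(raw), "\x00", " "), nil
}

func main() {
	// Value copied from the audit records above.
	s, err := decodeProctitle("737368642D73657373696F6E3A20636F7265205B707269765D")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(s) // sshd-session: core [priv]
}

The same decoder applied to the NETFILTER_CFG records further down yields "iptables-restore -w 5 -W 100000 --noflush --counters".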
Jan 28 00:04:36.667000 audit[4897]: USER_START pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.673000 audit[4901]: CRED_ACQ pid=4901 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.676519 kernel: audit: type=1105 audit(1769558676.667:748): pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:36.676668 kernel: audit: type=1103 audit(1769558676.673:749): pid=4901 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:37.057009 sshd[4901]: Connection closed by 20.161.92.111 port 58090 Jan 28 00:04:37.057790 sshd-session[4897]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:37.060000 audit[4897]: USER_END pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:37.062000 audit[4897]: CRED_DISP pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:37.066696 kernel: audit: type=1106 audit(1769558677.060:750): pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:37.067550 systemd[1]: sshd@8-159.69.123.112:22-20.161.92.111:58090.service: Deactivated successfully. Jan 28 00:04:37.072898 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 00:04:37.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-159.69.123.112:22-20.161.92.111:58090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:37.073615 kernel: audit: type=1104 audit(1769558677.062:751): pid=4897 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:37.076827 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit. Jan 28 00:04:37.081994 systemd-logind[1561]: Removed session 9. 
Jan 28 00:04:37.111857 kubelet[2747]: E0128 00:04:37.111062 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:04:39.109276 kubelet[2747]: E0128 00:04:39.108766 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:04:39.111989 kubelet[2747]: E0128 00:04:39.109802 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:04:42.164813 systemd[1]: Started sshd@9-159.69.123.112:22-20.161.92.111:58092.service - OpenSSH per-connection server daemon (20.161.92.111:58092). Jan 28 00:04:42.170797 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:04:42.170860 kernel: audit: type=1130 audit(1769558682.164:753): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-159.69.123.112:22-20.161.92.111:58092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:42.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-159.69.123.112:22-20.161.92.111:58092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:04:42.701000 audit[4919]: USER_ACCT pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.706730 sshd[4919]: Accepted publickey for core from 20.161.92.111 port 58092 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:42.711626 kernel: audit: type=1101 audit(1769558682.701:754): pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.711764 kernel: audit: type=1103 audit(1769558682.708:755): pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.708000 audit[4919]: CRED_ACQ pid=4919 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.710145 sshd-session[4919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:42.715907 kernel: audit: type=1006 audit(1769558682.708:756): pid=4919 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 28 00:04:42.719020 kernel: audit: type=1300 audit(1769558682.708:756): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc5115a0 a2=3 a3=0 items=0 ppid=1 pid=4919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:42.708000 audit[4919]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffc5115a0 a2=3 a3=0 items=0 ppid=1 pid=4919 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:42.708000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:42.720853 systemd-logind[1561]: New session 10 of user core. Jan 28 00:04:42.722646 kernel: audit: type=1327 audit(1769558682.708:756): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:42.727884 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 00:04:42.737000 audit[4919]: USER_START pid=4919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.741000 audit[4923]: CRED_ACQ pid=4923 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.744858 kernel: audit: type=1105 audit(1769558682.737:757): pid=4919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:42.744968 kernel: audit: type=1103 audit(1769558682.741:758): pid=4923 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:43.117689 sshd[4923]: Connection closed by 20.161.92.111 port 58092 Jan 28 00:04:43.118723 sshd-session[4919]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:43.123000 audit[4919]: USER_END pid=4919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:43.127922 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit. Jan 28 00:04:43.123000 audit[4919]: CRED_DISP pid=4919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:43.130766 kernel: audit: type=1106 audit(1769558683.123:759): pid=4919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:43.130868 kernel: audit: type=1104 audit(1769558683.123:760): pid=4919 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:43.130896 systemd[1]: sshd@9-159.69.123.112:22-20.161.92.111:58092.service: Deactivated successfully. Jan 28 00:04:43.130000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-159.69.123.112:22-20.161.92.111:58092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:43.135060 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:04:43.138553 systemd-logind[1561]: Removed session 10. 
Jan 28 00:04:45.109426 kubelet[2747]: E0128 00:04:45.109330 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:04:48.228674 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:04:48.228822 kernel: audit: type=1130 audit(1769558688.226:762): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-159.69.123.112:22-20.161.92.111:45482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:48.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-159.69.123.112:22-20.161.92.111:45482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:48.226770 systemd[1]: Started sshd@10-159.69.123.112:22-20.161.92.111:45482.service - OpenSSH per-connection server daemon (20.161.92.111:45482). Jan 28 00:04:48.754000 audit[4936]: USER_ACCT pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.761939 kernel: audit: type=1101 audit(1769558688.754:763): pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.762109 sshd[4936]: Accepted publickey for core from 20.161.92.111 port 45482 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:48.762000 audit[4936]: CRED_ACQ pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.764061 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:48.768085 kernel: audit: type=1103 audit(1769558688.762:764): pid=4936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.768225 kernel: audit: type=1006 audit(1769558688.762:765): pid=4936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 28 00:04:48.771324 kernel: audit: type=1300 audit(1769558688.762:765): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe3783fe0 a2=3 a3=0 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:48.762000 audit[4936]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 
a0=8 a1=ffffe3783fe0 a2=3 a3=0 items=0 ppid=1 pid=4936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:48.762000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:48.772787 kernel: audit: type=1327 audit(1769558688.762:765): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:48.780250 systemd-logind[1561]: New session 11 of user core. Jan 28 00:04:48.785931 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 00:04:48.790000 audit[4936]: USER_START pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.797472 kernel: audit: type=1105 audit(1769558688.790:766): pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.797626 kernel: audit: type=1103 audit(1769558688.791:767): pid=4940 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:48.791000 audit[4940]: CRED_ACQ pid=4940 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.109020 kubelet[2747]: E0128 00:04:49.108861 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:04:49.110616 kubelet[2747]: E0128 00:04:49.109697 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:04:49.148555 sshd[4940]: Connection closed by 20.161.92.111 port 45482 Jan 28 00:04:49.149272 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:49.150000 audit[4936]: USER_END pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.150000 audit[4936]: CRED_DISP pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.154700 kernel: audit: type=1106 audit(1769558689.150:768): pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.157620 kernel: audit: type=1104 audit(1769558689.150:769): pid=4936 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.159241 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit. Jan 28 00:04:49.162779 systemd[1]: sshd@10-159.69.123.112:22-20.161.92.111:45482.service: Deactivated successfully. Jan 28 00:04:49.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-159.69.123.112:22-20.161.92.111:45482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:49.175725 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 00:04:49.179665 systemd-logind[1561]: Removed session 11. Jan 28 00:04:49.257947 systemd[1]: Started sshd@11-159.69.123.112:22-20.161.92.111:45490.service - OpenSSH per-connection server daemon (20.161.92.111:45490). Jan 28 00:04:49.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-159.69.123.112:22-20.161.92.111:45490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:04:49.792000 audit[4952]: USER_ACCT pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.794091 sshd[4952]: Accepted publickey for core from 20.161.92.111 port 45490 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:49.795000 audit[4952]: CRED_ACQ pid=4952 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.795000 audit[4952]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd91d2140 a2=3 a3=0 items=0 ppid=1 pid=4952 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:49.795000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:49.798425 sshd-session[4952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:49.806127 systemd-logind[1561]: New session 12 of user core. Jan 28 00:04:49.813988 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 00:04:49.819000 audit[4952]: USER_START pid=4952 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:49.824000 audit[4956]: CRED_ACQ pid=4956 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:50.115770 kubelet[2747]: E0128 00:04:50.115545 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:04:50.276520 sshd[4956]: Connection closed by 20.161.92.111 port 45490 Jan 28 00:04:50.280783 sshd-session[4952]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:50.281000 audit[4952]: USER_END pid=4952 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" 
exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:50.282000 audit[4952]: CRED_DISP pid=4952 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:50.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-159.69.123.112:22-20.161.92.111:45490 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:50.290530 systemd[1]: sshd@11-159.69.123.112:22-20.161.92.111:45490.service: Deactivated successfully. Jan 28 00:04:50.295468 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 00:04:50.296738 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit. Jan 28 00:04:50.299213 systemd-logind[1561]: Removed session 12. Jan 28 00:04:50.397655 systemd[1]: Started sshd@12-159.69.123.112:22-20.161.92.111:45492.service - OpenSSH per-connection server daemon (20.161.92.111:45492). Jan 28 00:04:50.396000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-159.69.123.112:22-20.161.92.111:45492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:50.929000 audit[4987]: USER_ACCT pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:50.933743 sshd[4987]: Accepted publickey for core from 20.161.92.111 port 45492 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:50.934000 audit[4987]: CRED_ACQ pid=4987 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:50.934000 audit[4987]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf0be8f0 a2=3 a3=0 items=0 ppid=1 pid=4987 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:50.934000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:50.937230 sshd-session[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:50.949308 systemd-logind[1561]: New session 13 of user core. Jan 28 00:04:50.953999 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 28 00:04:50.957000 audit[4987]: USER_START pid=4987 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:50.960000 audit[4998]: CRED_ACQ pid=4998 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:51.109313 kubelet[2747]: E0128 00:04:51.109251 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:04:51.350144 sshd[4998]: Connection closed by 20.161.92.111 port 45492 Jan 28 00:04:51.350481 sshd-session[4987]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:51.353000 audit[4987]: USER_END pid=4987 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:51.353000 audit[4987]: CRED_DISP pid=4987 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:51.360892 systemd[1]: sshd@12-159.69.123.112:22-20.161.92.111:45492.service: Deactivated successfully. Jan 28 00:04:51.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-159.69.123.112:22-20.161.92.111:45492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:51.367546 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 00:04:51.369384 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit. Jan 28 00:04:51.371643 systemd-logind[1561]: Removed session 13. 
Jan 28 00:04:53.109684 kubelet[2747]: E0128 00:04:53.109617 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:04:56.463621 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 28 00:04:56.464209 kernel: audit: type=1130 audit(1769558696.459:789): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-159.69.123.112:22-20.161.92.111:48838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:56.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-159.69.123.112:22-20.161.92.111:48838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:56.460670 systemd[1]: Started sshd@13-159.69.123.112:22-20.161.92.111:48838.service - OpenSSH per-connection server daemon (20.161.92.111:48838). Jan 28 00:04:56.994000 audit[5010]: USER_ACCT pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:56.998586 sshd[5010]: Accepted publickey for core from 20.161.92.111 port 48838 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:57.003831 kernel: audit: type=1101 audit(1769558696.994:790): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.003959 kernel: audit: type=1103 audit(1769558696.998:791): pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:56.998000 audit[5010]: CRED_ACQ pid=5010 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.002463 sshd-session[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:57.006173 kernel: audit: type=1006 audit(1769558696.999:792): pid=5010 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 28 00:04:56.999000 audit[5010]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffd7459d0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:57.009630 kernel: audit: type=1300 audit(1769558696.999:792): arch=c00000b7 
syscall=64 success=yes exit=3 a0=8 a1=fffffd7459d0 a2=3 a3=0 items=0 ppid=1 pid=5010 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:56.999000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:57.011533 kernel: audit: type=1327 audit(1769558696.999:792): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:57.014845 systemd-logind[1561]: New session 14 of user core. Jan 28 00:04:57.024695 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 00:04:57.027000 audit[5010]: USER_START pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.032639 kernel: audit: type=1105 audit(1769558697.027:793): pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.031000 audit[5014]: CRED_ACQ pid=5014 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.035622 kernel: audit: type=1103 audit(1769558697.031:794): pid=5014 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.418934 sshd[5014]: Connection closed by 20.161.92.111 port 48838 Jan 28 00:04:57.421984 sshd-session[5010]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:57.424000 audit[5010]: USER_END pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.430985 systemd[1]: sshd@13-159.69.123.112:22-20.161.92.111:48838.service: Deactivated successfully. 
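The ImagePullBackOff entries keep returning to the same missing tags, so the back-off only delays retries of a pull that cannot succeed. A minimal sketch for asking the registry directly whether the tag exists, assuming ghcr.io serves the standard Docker Registry v2 API with an anonymous token endpoint for public images (repository name and tag taken from the log); a 404 status here matches the resolution failures above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo := "flatcar/calico/apiserver" // repository from the log
	tag := "v3.30.4"                   // tag reported as "not found"

	// 1. Fetch an anonymous pull token for the repository (assumed token endpoint).
	tokenURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokenURL)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// 2. HEAD the manifest: 200 means the tag exists, 404 matches the log above.
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	res.Body.Close()
	fmt.Println(res.Status) // e.g. "404 Not Found" for a missing tag
}

If the tag really is absent, the node-side back-off is behaving as designed; the fix lies with the published images or the configured registry/tag, not with the kubelet.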
Jan 28 00:04:57.434822 kernel: audit: type=1106 audit(1769558697.424:795): pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.434943 kernel: audit: type=1104 audit(1769558697.424:796): pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.424000 audit[5010]: CRED_DISP pid=5010 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:57.436293 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 00:04:57.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-159.69.123.112:22-20.161.92.111:48838 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:57.442860 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit. Jan 28 00:04:57.447118 systemd-logind[1561]: Removed session 14. Jan 28 00:04:57.529385 systemd[1]: Started sshd@14-159.69.123.112:22-20.161.92.111:48850.service - OpenSSH per-connection server daemon (20.161.92.111:48850). Jan 28 00:04:57.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-159.69.123.112:22-20.161.92.111:48850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:58.070000 audit[5026]: USER_ACCT pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:58.072569 sshd[5026]: Accepted publickey for core from 20.161.92.111 port 48850 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:58.073000 audit[5026]: CRED_ACQ pid=5026 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:58.073000 audit[5026]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd3ad3000 a2=3 a3=0 items=0 ppid=1 pid=5026 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:58.073000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:58.076376 sshd-session[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:58.091030 systemd-logind[1561]: New session 15 of user core. Jan 28 00:04:58.095955 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 28 00:04:58.100000 audit[5026]: USER_START pid=5026 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:58.103000 audit[5030]: CRED_ACQ pid=5030 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:58.616505 sshd[5030]: Connection closed by 20.161.92.111 port 48850 Jan 28 00:04:58.617709 sshd-session[5026]: pam_unix(sshd:session): session closed for user core Jan 28 00:04:58.620000 audit[5026]: USER_END pid=5026 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:58.620000 audit[5026]: CRED_DISP pid=5026 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:58.625702 systemd[1]: sshd@14-159.69.123.112:22-20.161.92.111:48850.service: Deactivated successfully. Jan 28 00:04:58.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-159.69.123.112:22-20.161.92.111:48850 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:04:58.630891 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 00:04:58.632787 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit. Jan 28 00:04:58.636299 systemd-logind[1561]: Removed session 15. Jan 28 00:04:58.723867 systemd[1]: Started sshd@15-159.69.123.112:22-20.161.92.111:48862.service - OpenSSH per-connection server daemon (20.161.92.111:48862). Jan 28 00:04:58.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-159.69.123.112:22-20.161.92.111:48862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:04:59.263000 audit[5040]: USER_ACCT pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:59.265368 sshd[5040]: Accepted publickey for core from 20.161.92.111 port 48862 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:04:59.264000 audit[5040]: CRED_ACQ pid=5040 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:59.264000 audit[5040]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc9ab0060 a2=3 a3=0 items=0 ppid=1 pid=5040 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:04:59.264000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:04:59.268487 sshd-session[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:04:59.274745 systemd-logind[1561]: New session 16 of user core. Jan 28 00:04:59.283050 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 00:04:59.288000 audit[5040]: USER_START pid=5040 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:04:59.290000 audit[5044]: CRED_ACQ pid=5044 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:00.122944 kubelet[2747]: E0128 00:05:00.122867 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:05:00.370000 audit[5056]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:05:00.370000 audit[5056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd737ad20 a2=0 a3=1 items=0 ppid=2849 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:00.370000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:05:00.374000 audit[5056]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5056 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 28 00:05:00.374000 audit[5056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd737ad20 a2=0 a3=1 items=0 ppid=2849 pid=5056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:00.374000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:05:00.393000 audit[5058]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:05:00.393000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd4380b10 a2=0 a3=1 items=0 ppid=2849 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:00.393000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:05:00.410000 audit[5058]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5058 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:05:00.410000 audit[5058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd4380b10 a2=0 a3=1 items=0 ppid=2849 pid=5058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:00.410000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:05:00.477167 sshd[5044]: Connection closed by 20.161.92.111 port 48862 Jan 28 00:05:00.477052 sshd-session[5040]: pam_unix(sshd:session): session closed for user core Jan 28 00:05:00.483000 audit[5040]: USER_END pid=5040 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:00.483000 audit[5040]: CRED_DISP pid=5040 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:00.492780 systemd[1]: sshd@15-159.69.123.112:22-20.161.92.111:48862.service: Deactivated successfully. Jan 28 00:05:00.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-159.69.123.112:22-20.161.92.111:48862 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:00.497967 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 00:05:00.501417 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Jan 28 00:05:00.506044 systemd-logind[1561]: Removed session 16. 
Jan 28 00:05:00.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-159.69.123.112:22-20.161.92.111:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:00.585993 systemd[1]: Started sshd@16-159.69.123.112:22-20.161.92.111:48868.service - OpenSSH per-connection server daemon (20.161.92.111:48868). Jan 28 00:05:01.119000 audit[5063]: USER_ACCT pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.121815 sshd[5063]: Accepted publickey for core from 20.161.92.111 port 48868 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:05:01.122000 audit[5063]: CRED_ACQ pid=5063 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.123000 audit[5063]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd1417a50 a2=3 a3=0 items=0 ppid=1 pid=5063 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:01.123000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:01.126059 sshd-session[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:05:01.137694 systemd-logind[1561]: New session 17 of user core. Jan 28 00:05:01.141960 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 28 00:05:01.145000 audit[5063]: USER_START pid=5063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.147000 audit[5067]: CRED_ACQ pid=5067 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.661332 sshd[5067]: Connection closed by 20.161.92.111 port 48868 Jan 28 00:05:01.661853 sshd-session[5063]: pam_unix(sshd:session): session closed for user core Jan 28 00:05:01.662000 audit[5063]: USER_END pid=5063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.669865 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 28 00:05:01.670056 kernel: audit: type=1106 audit(1769558701.662:826): pid=5063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.663000 audit[5063]: CRED_DISP pid=5063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.673708 kernel: audit: type=1104 audit(1769558701.663:827): pid=5063 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:01.674988 systemd[1]: sshd@16-159.69.123.112:22-20.161.92.111:48868.service: Deactivated successfully. Jan 28 00:05:01.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-159.69.123.112:22-20.161.92.111:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:01.679627 kernel: audit: type=1131 audit(1769558701.673:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-159.69.123.112:22-20.161.92.111:48868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:01.681415 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 00:05:01.684758 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Jan 28 00:05:01.691400 systemd-logind[1561]: Removed session 17. Jan 28 00:05:01.769190 systemd[1]: Started sshd@17-159.69.123.112:22-20.161.92.111:48882.service - OpenSSH per-connection server daemon (20.161.92.111:48882). 
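Each incoming connection is handled by a dedicated sshd@&lt;n&gt;-&lt;local&gt;:22-&lt;client&gt;:&lt;port&gt;.service unit, as in the "Started sshd@17-159.69.123.112:22-20.161.92.111:48882.service" line just above, so the journal itself records which client addresses keep reconnecting. A rough tally sketch under that line-format assumption; `connections_by_client` is an illustrative helper, not part of any tool shown in this log.

```python
import re
from collections import Counter

# Count per-connection sshd units by client address, based on the
# "Started sshd@N-<local>:22-<client>:<port>.service" lines above.
UNIT = re.compile(r"Started sshd@\d+-[\d.]+:\d+-(?P<client>[\d.]+):\d+\.service")

def connections_by_client(lines):
    return Counter(m["client"] for line in lines for m in UNIT.finditer(line))
```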
Jan 28 00:05:01.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-159.69.123.112:22-20.161.92.111:48882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:01.776672 kernel: audit: type=1130 audit(1769558701.767:829): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-159.69.123.112:22-20.161.92.111:48882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:02.110358 kubelet[2747]: E0128 00:05:02.109913 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:05:02.308000 audit[5078]: USER_ACCT pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.313835 kernel: audit: type=1101 audit(1769558702.308:830): pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.313969 sshd[5078]: Accepted publickey for core from 20.161.92.111 port 48882 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:05:02.312000 audit[5078]: CRED_ACQ pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.317577 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:05:02.320185 kernel: audit: type=1103 audit(1769558702.312:831): pid=5078 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.320297 kernel: audit: type=1006 audit(1769558702.315:832): pid=5078 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jan 28 00:05:02.315000 audit[5078]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdcde55c0 a2=3 a3=0 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:02.324388 kernel: audit: type=1300 audit(1769558702.315:832): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdcde55c0 a2=3 a3=0 items=0 ppid=1 pid=5078 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:02.315000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:02.326688 kernel: audit: type=1327 audit(1769558702.315:832): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:02.329910 systemd-logind[1561]: New session 18 of user core. Jan 28 00:05:02.334890 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 00:05:02.338000 audit[5078]: USER_START pid=5078 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.340000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.345676 kernel: audit: type=1105 audit(1769558702.338:833): pid=5078 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.696623 sshd[5082]: Connection closed by 20.161.92.111 port 48882 Jan 28 00:05:02.697945 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Jan 28 00:05:02.699000 audit[5078]: USER_END pid=5078 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.699000 audit[5078]: CRED_DISP pid=5078 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:02.707226 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Jan 28 00:05:02.708177 systemd[1]: sshd@17-159.69.123.112:22-20.161.92.111:48882.service: Deactivated successfully. Jan 28 00:05:02.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-159.69.123.112:22-20.161.92.111:48882 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:02.714090 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 00:05:02.721355 systemd-logind[1561]: Removed session 18. 
Jan 28 00:05:04.111931 kubelet[2747]: E0128 00:05:04.109728 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:05:04.111931 kubelet[2747]: E0128 00:05:04.109838 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:05:05.110269 kubelet[2747]: E0128 00:05:05.110208 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:05:05.110540 kubelet[2747]: E0128 00:05:05.110327 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:05:05.594000 audit[5094]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:05:05.594000 audit[5094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5592bd0 a2=0 a3=1 items=0 ppid=2849 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:05.594000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:05:05.601000 audit[5094]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5094 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 28 00:05:05.601000 audit[5094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffff5592bd0 a2=0 a3=1 items=0 ppid=2849 pid=5094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:05.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 28 00:05:07.805956 systemd[1]: Started sshd@18-159.69.123.112:22-20.161.92.111:60724.service - OpenSSH per-connection server daemon (20.161.92.111:60724). Jan 28 00:05:07.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-159.69.123.112:22-20.161.92.111:60724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:07.807237 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 28 00:05:07.807311 kernel: audit: type=1130 audit(1769558707.805:840): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-159.69.123.112:22-20.161.92.111:60724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:08.347000 audit[5099]: USER_ACCT pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.352661 kernel: audit: type=1101 audit(1769558708.347:841): pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.353318 sshd[5099]: Accepted publickey for core from 20.161.92.111 port 60724 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:05:08.352000 audit[5099]: CRED_ACQ pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.355223 sshd-session[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:05:08.359892 kernel: audit: type=1103 audit(1769558708.352:842): pid=5099 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.359979 kernel: audit: type=1006 audit(1769558708.352:843): pid=5099 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jan 28 00:05:08.352000 audit[5099]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 
a1=ffffd7d9f730 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:08.362783 kernel: audit: type=1300 audit(1769558708.352:843): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd7d9f730 a2=3 a3=0 items=0 ppid=1 pid=5099 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:08.352000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:08.363934 kernel: audit: type=1327 audit(1769558708.352:843): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:08.364679 systemd-logind[1561]: New session 19 of user core. Jan 28 00:05:08.371887 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 00:05:08.376000 audit[5099]: USER_START pid=5099 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.382000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.385241 kernel: audit: type=1105 audit(1769558708.376:844): pid=5099 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.385323 kernel: audit: type=1103 audit(1769558708.382:845): pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.736086 sshd[5103]: Connection closed by 20.161.92.111 port 60724 Jan 28 00:05:08.737039 sshd-session[5099]: pam_unix(sshd:session): session closed for user core Jan 28 00:05:08.742000 audit[5099]: USER_END pid=5099 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.742000 audit[5099]: CRED_DISP pid=5099 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.752506 kernel: audit: type=1106 audit(1769558708.742:846): pid=5099 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.752630 kernel: audit: type=1104 audit(1769558708.742:847): pid=5099 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:08.754056 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Jan 28 00:05:08.755180 systemd[1]: sshd@18-159.69.123.112:22-20.161.92.111:60724.service: Deactivated successfully. Jan 28 00:05:08.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-159.69.123.112:22-20.161.92.111:60724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:08.763837 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 00:05:08.766455 systemd-logind[1561]: Removed session 19. Jan 28 00:05:11.108655 kubelet[2747]: E0128 00:05:11.107919 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:05:13.844824 systemd[1]: Started sshd@19-159.69.123.112:22-20.161.92.111:46936.service - OpenSSH per-connection server daemon (20.161.92.111:46936). Jan 28 00:05:13.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-159.69.123.112:22-20.161.92.111:46936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:13.848209 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:05:13.848348 kernel: audit: type=1130 audit(1769558713.844:849): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-159.69.123.112:22-20.161.92.111:46936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 28 00:05:14.109433 kubelet[2747]: E0128 00:05:14.109264 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:05:14.401000 audit[5114]: USER_ACCT pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.402758 sshd[5114]: Accepted publickey for core from 20.161.92.111 port 46936 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:05:14.406458 sshd-session[5114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:05:14.404000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.409505 kernel: audit: type=1101 audit(1769558714.401:850): pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.409651 kernel: audit: type=1103 audit(1769558714.404:851): pid=5114 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.413059 kernel: audit: type=1006 audit(1769558714.405:852): pid=5114 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 28 00:05:14.405000 audit[5114]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb408060 a2=3 a3=0 items=0 ppid=1 pid=5114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:14.416316 kernel: audit: type=1300 audit(1769558714.405:852): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdb408060 a2=3 a3=0 items=0 ppid=1 pid=5114 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:14.405000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:14.417638 kernel: audit: type=1327 audit(1769558714.405:852): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:14.421869 systemd-logind[1561]: New session 20 of user core. Jan 28 00:05:14.428372 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 28 00:05:14.432000 audit[5114]: USER_START pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.443905 kernel: audit: type=1105 audit(1769558714.432:853): pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.444035 kernel: audit: type=1103 audit(1769558714.440:854): pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.440000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.795647 sshd[5118]: Connection closed by 20.161.92.111 port 46936 Jan 28 00:05:14.796871 sshd-session[5114]: pam_unix(sshd:session): session closed for user core Jan 28 00:05:14.798000 audit[5114]: USER_END pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.798000 audit[5114]: CRED_DISP pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.805554 kernel: audit: type=1106 audit(1769558714.798:855): pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.805732 kernel: audit: type=1104 audit(1769558714.798:856): pid=5114 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:14.804483 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Jan 28 00:05:14.806452 systemd[1]: sshd@19-159.69.123.112:22-20.161.92.111:46936.service: Deactivated successfully. Jan 28 00:05:14.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-159.69.123.112:22-20.161.92.111:46936 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:14.810399 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 00:05:14.814497 systemd-logind[1561]: Removed session 20. 
Jan 28 00:05:17.108484 kubelet[2747]: E0128 00:05:17.108097 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:05:17.109619 kubelet[2747]: E0128 00:05:17.109437 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:05:18.113806 kubelet[2747]: E0128 00:05:18.113748 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:05:19.110314 kubelet[2747]: E0128 00:05:19.110244 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:05:19.908966 systemd[1]: Started sshd@20-159.69.123.112:22-20.161.92.111:46948.service - OpenSSH per-connection server daemon (20.161.92.111:46948). 
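The kubelet records repeated throughout this stretch are the same failure cycling: every pod that references a ghcr.io/flatcar/calico/*:v3.30.4 image sits in ImagePullBackOff because the tag cannot be resolved, and pod_workers logs "Error syncing pod, skipping" on each retry. A quick way to see which images are affected is to tally the back-off messages; the sketch below matches the escaped quoting of this journal capture, and `backoff_counts` is an illustrative name.

```python
import re
from collections import Counter

# Tally kubelet ImagePullBackOff messages by image reference, matching the
# escaped quoting seen in this journal capture.
BACKOFF = re.compile(r'Back-off pulling image \\*"(?P<image>[^"\\]+)')

def backoff_counts(lines):
    return Counter(m for line in lines for m in BACKOFF.findall(line))
```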
Jan 28 00:05:19.913090 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:05:19.913198 kernel: audit: type=1130 audit(1769558719.908:858): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-159.69.123.112:22-20.161.92.111:46948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:19.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-159.69.123.112:22-20.161.92.111:46948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:20.450000 audit[5130]: USER_ACCT pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.454618 sshd[5130]: Accepted publickey for core from 20.161.92.111 port 46948 ssh2: RSA SHA256:Z7gvsNnC87g5U4jgzcxzTKJliRtP6met8IXSXUPDzv0 Jan 28 00:05:20.457079 kernel: audit: type=1101 audit(1769558720.450:859): pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.457173 kernel: audit: type=1103 audit(1769558720.454:860): pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.454000 audit[5130]: CRED_ACQ pid=5130 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.455852 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:05:20.460160 kernel: audit: type=1006 audit(1769558720.454:861): pid=5130 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 28 00:05:20.454000 audit[5130]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffccb17ce0 a2=3 a3=0 items=0 ppid=1 pid=5130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:20.463460 kernel: audit: type=1300 audit(1769558720.454:861): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffccb17ce0 a2=3 a3=0 items=0 ppid=1 pid=5130 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:20.454000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:20.464746 kernel: audit: type=1327 audit(1769558720.454:861): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 28 00:05:20.470140 systemd-logind[1561]: New session 21 of user core. Jan 28 00:05:20.476871 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 28 00:05:20.485000 audit[5130]: USER_START pid=5130 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.493941 kernel: audit: type=1105 audit(1769558720.485:862): pid=5130 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.494052 kernel: audit: type=1103 audit(1769558720.490:863): pid=5159 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.490000 audit[5159]: CRED_ACQ pid=5159 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.840389 sshd[5159]: Connection closed by 20.161.92.111 port 46948 Jan 28 00:05:20.842828 sshd-session[5130]: pam_unix(sshd:session): session closed for user core Jan 28 00:05:20.843000 audit[5130]: USER_END pid=5130 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.847912 systemd[1]: sshd@20-159.69.123.112:22-20.161.92.111:46948.service: Deactivated successfully. Jan 28 00:05:20.843000 audit[5130]: CRED_DISP pid=5130 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.851761 kernel: audit: type=1106 audit(1769558720.843:864): pid=5130 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.851846 kernel: audit: type=1104 audit(1769558720.843:865): pid=5130 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 28 00:05:20.851111 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 00:05:20.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-159.69.123.112:22-20.161.92.111:46948 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 28 00:05:20.860484 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Jan 28 00:05:20.863027 systemd-logind[1561]: Removed session 21. 
Jan 28 00:05:22.108766 kubelet[2747]: E0128 00:05:22.108703 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:05:29.111223 kubelet[2747]: E0128 00:05:29.108890 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:05:29.111223 kubelet[2747]: E0128 00:05:29.109495 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:05:30.114250 kubelet[2747]: E0128 00:05:30.113752 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w6mgn" podUID="2955546a-cb98-4307-9f9a-44877b3e7017" Jan 28 00:05:31.108022 kubelet[2747]: E0128 00:05:31.107638 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-5lsmb" podUID="137246fc-f131-4552-a311-34e5752765be" Jan 28 00:05:31.109891 kubelet[2747]: E0128 00:05:31.109759 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-558b9cfcb8-hltp7" podUID="28c2487d-b7dc-46cd-9e63-ec0ab10a8703" Jan 28 00:05:34.109339 kubelet[2747]: E0128 00:05:34.109191 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-h6h5c" podUID="bec7677b-1e96-4176-929f-6c7596f75411" Jan 28 00:05:35.369133 systemd[1]: cri-containerd-09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444.scope: Deactivated successfully. Jan 28 00:05:35.370776 systemd[1]: cri-containerd-09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444.scope: Consumed 4.970s CPU time, 63.2M memory peak, 2.7M read from disk. Jan 28 00:05:35.371000 audit: BPF prog-id=256 op=LOAD Jan 28 00:05:35.373906 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 28 00:05:35.374001 kernel: audit: type=1334 audit(1769558735.371:867): prog-id=256 op=LOAD Jan 28 00:05:35.374039 kernel: audit: type=1334 audit(1769558735.371:868): prog-id=83 op=UNLOAD Jan 28 00:05:35.371000 audit: BPF prog-id=83 op=UNLOAD Jan 28 00:05:35.374554 containerd[1582]: time="2026-01-28T00:05:35.374498940Z" level=info msg="received container exit event container_id:\"09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444\" id:\"09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444\" pid:2582 exit_status:1 exited_at:{seconds:1769558735 nanos:373786617}" Jan 28 00:05:35.375000 audit: BPF prog-id=98 op=UNLOAD Jan 28 00:05:35.375000 audit: BPF prog-id=102 op=UNLOAD Jan 28 00:05:35.378059 kernel: audit: type=1334 audit(1769558735.375:869): prog-id=98 op=UNLOAD Jan 28 00:05:35.378108 kernel: audit: type=1334 audit(1769558735.375:870): prog-id=102 op=UNLOAD Jan 28 00:05:35.407306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444-rootfs.mount: Deactivated successfully. 
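At 00:05:35 the kube-controller-manager container (09da5343…) exits with status 1 after roughly 5 s of CPU time; systemd tears down its cri-containerd scope and rootfs mount, and containerd publishes a "received container exit event" message. A small parsing sketch for that message format follows; `parse_exit_event` is an illustrative name, not containerd API.

```python
import re

# Extract container id, pid and exit status from containerd's
# "received container exit event" log message shown above.
EXIT = re.compile(
    r'received container exit event container_id:\\?"(?P<cid>[0-9a-f]+)\\?"'
    r'.*?pid:(?P<pid>\d+) exit_status:(?P<status>\d+)'
)

def parse_exit_event(line):
    m = EXIT.search(line)
    return m.groupdict() if m else None
```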
Jan 28 00:05:35.806482 kubelet[2747]: E0128 00:05:35.806229 2747 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38284->10.0.0.2:2379: read: connection timed out" Jan 28 00:05:35.951961 kubelet[2747]: I0128 00:05:35.951909 2747 scope.go:117] "RemoveContainer" containerID="09da534380881b8ed3f9d2d4d8d462bf7809f7801ca9f5397634dc55e9977444" Jan 28 00:05:35.968554 containerd[1582]: time="2026-01-28T00:05:35.968467266Z" level=info msg="CreateContainer within sandbox \"08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 28 00:05:35.979877 containerd[1582]: time="2026-01-28T00:05:35.979816318Z" level=info msg="Container 9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:05:35.996931 containerd[1582]: time="2026-01-28T00:05:35.996841676Z" level=info msg="CreateContainer within sandbox \"08285973536109550328241e06f3371b4982e13445dac251e52fc31011c4370f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760\"" Jan 28 00:05:36.005907 containerd[1582]: time="2026-01-28T00:05:36.005854279Z" level=info msg="StartContainer for \"9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760\"" Jan 28 00:05:36.008230 containerd[1582]: time="2026-01-28T00:05:36.007879249Z" level=info msg="connecting to shim 9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760" address="unix:///run/containerd/s/01df3c33092bf2b4dab0baf4da040075e26886324426359c870de682bcd7bba2" protocol=ttrpc version=3 Jan 28 00:05:36.039940 systemd[1]: Started cri-containerd-9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760.scope - libcontainer container 9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760. 
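The recovery path is visible in order above, alongside a lease-update failure caused by an etcd read timeout: kubelet notes "RemoveContainer" for the dead id, asks containerd to CreateContainer in the same sandbox with Attempt:1, issues StartContainer, and systemd starts a fresh cri-containerd-9ca494….scope. If you want to confirm the resulting restart counts from the API side, a hedged sketch with the Kubernetes Python client is below; it assumes the `kubernetes` package, kubeconfig access, and that the pod lives in kube-system, none of which is stated in the log.

```python
from kubernetes import client, config

# Cross-check the restart seen above: list kube-system containers whose
# restartCount is non-zero. Assumes kubeconfig access; purely illustrative.
config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod("kube-system").items:
    for cs in pod.status.container_statuses or []:
        if cs.restart_count > 0:
            print(pod.metadata.name, cs.name, cs.restart_count)
```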
Jan 28 00:05:36.059000 audit: BPF prog-id=257 op=LOAD Jan 28 00:05:36.062657 kernel: audit: type=1334 audit(1769558736.059:871): prog-id=257 op=LOAD Jan 28 00:05:36.061000 audit: BPF prog-id=258 op=LOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.066571 kernel: audit: type=1334 audit(1769558736.061:872): prog-id=258 op=LOAD Jan 28 00:05:36.066711 kernel: audit: type=1300 audit(1769558736.061:872): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.066737 kernel: audit: type=1327 audit(1769558736.061:872): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.068949 kernel: audit: type=1334 audit(1769558736.061:873): prog-id=258 op=UNLOAD Jan 28 00:05:36.069068 kernel: audit: type=1300 audit(1769558736.061:873): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: BPF prog-id=258 op=UNLOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.061000 audit: BPF prog-id=259 op=LOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.061000 audit: BPF prog-id=260 op=LOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.061000 audit: BPF prog-id=260 op=UNLOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.061000 audit: BPF prog-id=259 op=UNLOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.061000 audit: BPF prog-id=261 op=LOAD Jan 28 00:05:36.061000 audit[5187]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=2429 pid=5187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:36.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3963613439346131663565333639313534343736636566353763336265 Jan 28 00:05:36.115532 containerd[1582]: time="2026-01-28T00:05:36.115142896Z" level=info msg="StartContainer for \"9ca494a1f5e369154476cef57c3be1e3bafef70239d7d244aad38b078b71e760\" returns successfully" Jan 28 00:05:37.055654 systemd[1]: cri-containerd-59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7.scope: Deactivated successfully. Jan 28 00:05:37.056023 systemd[1]: cri-containerd-59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7.scope: Consumed 40.236s CPU time, 124.1M memory peak. 
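The audit PROCTITLE records in the runc start sequence above are hex-encoded because the recorded command line contains NUL separators between arguments. A short sketch to make them readable; the hex string in the example is just the first 16 bytes of the proctitle logged for pid 5187:

# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_proctitle: str) -> list[str]:
    raw = bytes.fromhex(hex_proctitle)
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

print(decode_proctitle("72756E63002D2D726F6F74002F72756E"))
# ['runc', '--root', '/run']  -- the full value continues with the containerd task paths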
Jan 28 00:05:37.058000 audit: BPF prog-id=146 op=UNLOAD Jan 28 00:05:37.058000 audit: BPF prog-id=150 op=UNLOAD Jan 28 00:05:37.063028 containerd[1582]: time="2026-01-28T00:05:37.062980737Z" level=info msg="received container exit event container_id:\"59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7\" id:\"59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7\" pid:3079 exit_status:1 exited_at:{seconds:1769558737 nanos:61651051}" Jan 28 00:05:37.095526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7-rootfs.mount: Deactivated successfully. Jan 28 00:05:37.968028 kubelet[2747]: I0128 00:05:37.967994 2747 scope.go:117] "RemoveContainer" containerID="59f7da123f6f792eb30880c675760dc161228b26987270b3b2656a99aec197e7" Jan 28 00:05:37.971007 containerd[1582]: time="2026-01-28T00:05:37.970970456Z" level=info msg="CreateContainer within sandbox \"72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 28 00:05:37.983789 containerd[1582]: time="2026-01-28T00:05:37.983745283Z" level=info msg="Container d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34: CDI devices from CRI Config.CDIDevices: []" Jan 28 00:05:37.995040 containerd[1582]: time="2026-01-28T00:05:37.994993702Z" level=info msg="CreateContainer within sandbox \"72d7beebfd4646f23878dd7786ed031828e2c8a51769b2f9c98a8c145e268304\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34\"" Jan 28 00:05:37.995788 containerd[1582]: time="2026-01-28T00:05:37.995761826Z" level=info msg="StartContainer for \"d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34\"" Jan 28 00:05:37.996920 containerd[1582]: time="2026-01-28T00:05:37.996889592Z" level=info msg="connecting to shim d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34" address="unix:///run/containerd/s/f600fcd0e4e11649f2ecbd4fa8c9e0030cae7ef039868c16f0226aad24319a60" protocol=ttrpc version=3 Jan 28 00:05:38.027060 systemd[1]: Started cri-containerd-d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34.scope - libcontainer container d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34. 
Jan 28 00:05:38.054000 audit: BPF prog-id=262 op=LOAD Jan 28 00:05:38.055000 audit: BPF prog-id=263 op=LOAD Jan 28 00:05:38.055000 audit[5229]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.055000 audit: BPF prog-id=263 op=UNLOAD Jan 28 00:05:38.055000 audit[5229]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.055000 audit: BPF prog-id=264 op=LOAD Jan 28 00:05:38.055000 audit[5229]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.055000 audit: BPF prog-id=265 op=LOAD Jan 28 00:05:38.055000 audit[5229]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.055000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.056000 audit: BPF prog-id=265 op=UNLOAD Jan 28 00:05:38.056000 audit[5229]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.056000 audit: BPF prog-id=264 op=UNLOAD Jan 28 00:05:38.056000 audit[5229]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.056000 audit: BPF prog-id=266 op=LOAD Jan 28 00:05:38.056000 audit[5229]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2874 pid=5229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 28 00:05:38.056000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435663264343836336336386663373663636565323033613634363935 Jan 28 00:05:38.080931 containerd[1582]: time="2026-01-28T00:05:38.080885778Z" level=info msg="StartContainer for \"d5f2d4863c68fc76ccee203a64695539df0d8d0a2ec425c251353d8e32691f34\" returns successfully" Jan 28 00:05:39.589839 kubelet[2747]: E0128 00:05:39.581451 2747 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:38070->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-697c7bd8db-7vcbz.188ebc2707ba5afc calico-apiserver 1706 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-697c7bd8db-7vcbz,UID:07cd90a0-de7e-4c03-9b09-e4adb1ab3e71,APIVersion:v1,ResourceVersion:847,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4593-0-0-n-20383d5ef7,},FirstTimestamp:2026-01-28 00:02:59 +0000 UTC,LastTimestamp:2026-01-28 00:05:29.108823363 +0000 UTC m=+209.135706282,Count:9,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4593-0-0-n-20383d5ef7,}" Jan 28 00:05:41.107579 kubelet[2747]: E0128 00:05:41.107443 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74bd894b84-mmqcv" podUID="1d007c43-f99e-43db-8dd1-f5d56f04a788" Jan 28 00:05:41.108549 kubelet[2747]: E0128 00:05:41.108389 2747 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-697c7bd8db-7vcbz" podUID="07cd90a0-de7e-4c03-9b09-e4adb1ab3e71" Jan 28 00:05:41.327752 systemd[1]: cri-containerd-07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598.scope: Deactivated successfully. Jan 28 00:05:41.328554 systemd[1]: cri-containerd-07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598.scope: Consumed 4.067s CPU time, 25.1M memory peak, 3.3M read from disk. Jan 28 00:05:41.330000 audit: BPF prog-id=267 op=LOAD Jan 28 00:05:41.332159 kernel: kauditd_printk_skb: 40 callbacks suppressed Jan 28 00:05:41.332382 kernel: audit: type=1334 audit(1769558741.330:889): prog-id=267 op=LOAD Jan 28 00:05:41.332920 kernel: audit: type=1334 audit(1769558741.331:890): prog-id=88 op=UNLOAD Jan 28 00:05:41.331000 audit: BPF prog-id=88 op=UNLOAD Jan 28 00:05:41.332000 audit: BPF prog-id=103 op=UNLOAD Jan 28 00:05:41.332000 audit: BPF prog-id=107 op=UNLOAD Jan 28 00:05:41.334872 kernel: audit: type=1334 audit(1769558741.332:891): prog-id=103 op=UNLOAD Jan 28 00:05:41.334959 kernel: audit: type=1334 audit(1769558741.332:892): prog-id=107 op=UNLOAD Jan 28 00:05:41.336037 containerd[1582]: time="2026-01-28T00:05:41.335997199Z" level=info msg="received container exit event container_id:\"07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598\" id:\"07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598\" pid:2589 exit_status:1 exited_at:{seconds:1769558741 nanos:333881066}" Jan 28 00:05:41.375501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-07d5136e7985125023e98326b66393c10c2b915587250879d2267e1bdb0b2598-rootfs.mount: Deactivated successfully.