Jan 20 23:54:20.451768 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 20 23:54:20.451792 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 20 22:19:20 -00 2026 Jan 20 23:54:20.451803 kernel: KASLR enabled Jan 20 23:54:20.451809 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II Jan 20 23:54:20.451815 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218 Jan 20 23:54:20.451821 kernel: random: crng init done Jan 20 23:54:20.451828 kernel: secureboot: Secure boot disabled Jan 20 23:54:20.451834 kernel: ACPI: Early table checksum verification disabled Jan 20 23:54:20.451840 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS ) Jan 20 23:54:20.451848 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013) Jan 20 23:54:20.451854 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451861 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451867 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451873 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451882 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451889 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451895 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451902 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451908 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:54:20.451915 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 23:54:20.451921 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 20 23:54:20.451928 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 20 23:54:20.451934 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff] Jan 20 23:54:20.451942 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff] Jan 20 23:54:20.451949 kernel: Zone ranges: Jan 20 23:54:20.451956 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 20 23:54:20.451962 kernel: DMA32 empty Jan 20 23:54:20.451969 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff] Jan 20 23:54:20.451975 kernel: Device empty Jan 20 23:54:20.451981 kernel: Movable zone start for each node Jan 20 23:54:20.451988 kernel: Early memory node ranges Jan 20 23:54:20.451995 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff] Jan 20 23:54:20.452001 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff] Jan 20 23:54:20.452008 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff] Jan 20 23:54:20.452014 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff] Jan 20 23:54:20.452022 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff] Jan 20 23:54:20.452029 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff] Jan 20 23:54:20.452035 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff] Jan 20 23:54:20.452042 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff] Jan 20 23:54:20.452048 kernel: node 0: [mem 
0x0000000139fe0000-0x0000000139ffffff] Jan 20 23:54:20.452057 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff] Jan 20 23:54:20.452066 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges Jan 20 23:54:20.452072 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 20 23:54:20.452124 kernel: psci: probing for conduit method from ACPI. Jan 20 23:54:20.452132 kernel: psci: PSCIv1.1 detected in firmware. Jan 20 23:54:20.452139 kernel: psci: Using standard PSCI v0.2 function IDs Jan 20 23:54:20.452146 kernel: psci: Trusted OS migration not required Jan 20 23:54:20.452153 kernel: psci: SMC Calling Convention v1.1 Jan 20 23:54:20.452160 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 20 23:54:20.452169 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 20 23:54:20.452176 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 20 23:54:20.452183 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 20 23:54:20.452190 kernel: Detected PIPT I-cache on CPU0 Jan 20 23:54:20.452196 kernel: CPU features: detected: GIC system register CPU interface Jan 20 23:54:20.452203 kernel: CPU features: detected: Spectre-v4 Jan 20 23:54:20.452210 kernel: CPU features: detected: Spectre-BHB Jan 20 23:54:20.452217 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 20 23:54:20.452224 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 20 23:54:20.452230 kernel: CPU features: detected: ARM erratum 1418040 Jan 20 23:54:20.452237 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 20 23:54:20.452245 kernel: alternatives: applying boot alternatives Jan 20 23:54:20.452253 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3c423a3ed4865abab898483a94535823dbc3dcf7b9fc4db9a9e44dcb3b3370eb Jan 20 23:54:20.452261 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 20 23:54:20.452268 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 23:54:20.452274 kernel: Fallback order for Node 0: 0 Jan 20 23:54:20.452281 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000 Jan 20 23:54:20.452288 kernel: Policy zone: Normal Jan 20 23:54:20.452295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 23:54:20.452301 kernel: software IO TLB: area num 2. Jan 20 23:54:20.452308 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 20 23:54:20.452316 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 20 23:54:20.452323 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 23:54:20.452331 kernel: rcu: RCU event tracing is enabled. Jan 20 23:54:20.452339 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 20 23:54:20.452346 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 23:54:20.452352 kernel: Tracing variant of Tasks RCU enabled. Jan 20 23:54:20.452359 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 23:54:20.452366 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 20 23:54:20.452373 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Jan 20 23:54:20.452380 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 20 23:54:20.452387 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 23:54:20.452396 kernel: GICv3: 256 SPIs implemented Jan 20 23:54:20.452403 kernel: GICv3: 0 Extended SPIs implemented Jan 20 23:54:20.452410 kernel: Root IRQ handler: gic_handle_irq Jan 20 23:54:20.452417 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 20 23:54:20.452424 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 20 23:54:20.452434 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 20 23:54:20.452444 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 20 23:54:20.452453 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1) Jan 20 23:54:20.452464 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1) Jan 20 23:54:20.452473 kernel: GICv3: using LPI property table @0x0000000100120000 Jan 20 23:54:20.452480 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000 Jan 20 23:54:20.452489 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 23:54:20.452497 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 23:54:20.452504 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 20 23:54:20.452511 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 20 23:54:20.452517 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 20 23:54:20.452524 kernel: Console: colour dummy device 80x25 Jan 20 23:54:20.452560 kernel: ACPI: Core revision 20240827 Jan 20 23:54:20.452570 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 20 23:54:20.452578 kernel: pid_max: default: 32768 minimum: 301 Jan 20 23:54:20.452587 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 23:54:20.452595 kernel: landlock: Up and running. Jan 20 23:54:20.452602 kernel: SELinux: Initializing. Jan 20 23:54:20.452609 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 23:54:20.452617 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 20 23:54:20.452624 kernel: rcu: Hierarchical SRCU implementation. Jan 20 23:54:20.452632 kernel: rcu: Max phase no-delay instances is 400. Jan 20 23:54:20.452639 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 23:54:20.452648 kernel: Remapping and enabling EFI services. Jan 20 23:54:20.452655 kernel: smp: Bringing up secondary CPUs ... Jan 20 23:54:20.452662 kernel: Detected PIPT I-cache on CPU1 Jan 20 23:54:20.452669 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 20 23:54:20.452676 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000 Jan 20 23:54:20.452683 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 23:54:20.452690 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 20 23:54:20.452699 kernel: smp: Brought up 1 node, 2 CPUs Jan 20 23:54:20.452706 kernel: SMP: Total of 2 processors activated. 
Jan 20 23:54:20.452718 kernel: CPU: All CPU(s) started at EL1 Jan 20 23:54:20.452726 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 23:54:20.452734 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 23:54:20.452741 kernel: CPU features: detected: Common not Private translations Jan 20 23:54:20.452749 kernel: CPU features: detected: CRC32 instructions Jan 20 23:54:20.452756 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 20 23:54:20.452765 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 23:54:20.452772 kernel: CPU features: detected: LSE atomic instructions Jan 20 23:54:20.452780 kernel: CPU features: detected: Privileged Access Never Jan 20 23:54:20.452787 kernel: CPU features: detected: RAS Extension Support Jan 20 23:54:20.452794 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 20 23:54:20.452802 kernel: alternatives: applying system-wide alternatives Jan 20 23:54:20.452811 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1 Jan 20 23:54:20.452819 kernel: Memory: 3885924K/4096000K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 188596K reserved, 16384K cma-reserved) Jan 20 23:54:20.452827 kernel: devtmpfs: initialized Jan 20 23:54:20.452834 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 23:54:20.452842 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 20 23:54:20.452850 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 20 23:54:20.452858 kernel: 0 pages in range for non-PLT usage Jan 20 23:54:20.452867 kernel: 515168 pages in range for PLT usage Jan 20 23:54:20.452874 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 23:54:20.452882 kernel: SMBIOS 3.0.0 present. Jan 20 23:54:20.452889 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017 Jan 20 23:54:20.452897 kernel: DMI: Memory slots populated: 1/1 Jan 20 23:54:20.452904 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 23:54:20.452912 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 20 23:54:20.452921 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 23:54:20.452929 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 23:54:20.452937 kernel: audit: initializing netlink subsys (disabled) Jan 20 23:54:20.452945 kernel: audit: type=2000 audit(0.011:1): state=initialized audit_enabled=0 res=1 Jan 20 23:54:20.452952 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 23:54:20.452960 kernel: cpuidle: using governor menu Jan 20 23:54:20.452967 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 20 23:54:20.452976 kernel: ASID allocator initialised with 32768 entries Jan 20 23:54:20.452984 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 23:54:20.452991 kernel: Serial: AMBA PL011 UART driver Jan 20 23:54:20.452999 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 23:54:20.453006 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 23:54:20.453013 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 20 23:54:20.453021 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 20 23:54:20.453029 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 23:54:20.453038 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 23:54:20.453046 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 20 23:54:20.453053 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 20 23:54:20.453061 kernel: ACPI: Added _OSI(Module Device) Jan 20 23:54:20.453068 kernel: ACPI: Added _OSI(Processor Device) Jan 20 23:54:20.453083 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 23:54:20.453093 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 23:54:20.453113 kernel: ACPI: Interpreter enabled Jan 20 23:54:20.453120 kernel: ACPI: Using GIC for interrupt routing Jan 20 23:54:20.453128 kernel: ACPI: MCFG table detected, 1 entries Jan 20 23:54:20.453135 kernel: ACPI: CPU0 has been hot-added Jan 20 23:54:20.453143 kernel: ACPI: CPU1 has been hot-added Jan 20 23:54:20.453150 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 20 23:54:20.453158 kernel: printk: legacy console [ttyAMA0] enabled Jan 20 23:54:20.453167 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 23:54:20.453334 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 23:54:20.453421 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 20 23:54:20.453501 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 20 23:54:20.453595 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 20 23:54:20.453676 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 20 23:54:20.453689 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 20 23:54:20.453697 kernel: PCI host bridge to bus 0000:00 Jan 20 23:54:20.453781 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 20 23:54:20.453854 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 20 23:54:20.453926 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 20 23:54:20.453997 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 23:54:20.454110 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 20 23:54:20.454207 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint Jan 20 23:54:20.454294 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff] Jan 20 23:54:20.454374 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref] Jan 20 23:54:20.454467 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.454585 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff] Jan 20 23:54:20.454673 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 20 
23:54:20.454753 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 20 23:54:20.454832 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 20 23:54:20.454920 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.454999 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff] Jan 20 23:54:20.455124 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 20 23:54:20.455218 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 20 23:54:20.455309 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.455422 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff] Jan 20 23:54:20.455505 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 20 23:54:20.455604 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 20 23:54:20.455685 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 20 23:54:20.455771 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.455850 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff] Jan 20 23:54:20.455929 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 20 23:54:20.456007 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 20 23:54:20.456105 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 20 23:54:20.456203 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.456285 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff] Jan 20 23:54:20.456363 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 20 23:54:20.456442 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 20 23:54:20.456520 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 20 23:54:20.456622 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.456704 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff] Jan 20 23:54:20.456782 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 20 23:54:20.456861 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff] Jan 20 23:54:20.456940 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 20 23:54:20.457026 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.457144 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff] Jan 20 23:54:20.457230 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 20 23:54:20.457310 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff] Jan 20 23:54:20.457388 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref] Jan 20 23:54:20.457472 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.457585 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff] Jan 20 23:54:20.457674 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 20 23:54:20.457753 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff] Jan 20 23:54:20.457841 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:54:20.457924 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff] Jan 20 23:54:20.458004 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 20 23:54:20.458107 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff] Jan 20 23:54:20.458200 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 
0x070002 conventional PCI endpoint Jan 20 23:54:20.458280 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007] Jan 20 23:54:20.458373 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 20 23:54:20.458455 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff] Jan 20 23:54:20.458551 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 20 23:54:20.458642 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 20 23:54:20.458737 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 20 23:54:20.458819 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit] Jan 20 23:54:20.458908 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint Jan 20 23:54:20.458989 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff] Jan 20 23:54:20.459071 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 20 23:54:20.459190 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 20 23:54:20.459273 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 20 23:54:20.459383 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 20 23:54:20.459466 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff] Jan 20 23:54:20.459561 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 20 23:54:20.459657 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint Jan 20 23:54:20.459738 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff] Jan 20 23:54:20.459819 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 20 23:54:20.459907 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 20 23:54:20.459988 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff] Jan 20 23:54:20.460071 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref] Jan 20 23:54:20.460185 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 20 23:54:20.460269 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 20 23:54:20.460348 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 20 23:54:20.460426 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 20 23:54:20.460507 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 20 23:54:20.460632 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 20 23:54:20.460717 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 20 23:54:20.460800 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 20 23:54:20.460879 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 20 23:54:20.460958 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 20 23:54:20.461040 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 20 23:54:20.461144 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 20 23:54:20.461226 kernel: pci 0000:00:02.3: bridge window [mem 
0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 20 23:54:20.461308 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 20 23:54:20.461389 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 20 23:54:20.461468 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 20 23:54:20.461571 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 20 23:54:20.461659 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 20 23:54:20.461737 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 20 23:54:20.461817 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 20 23:54:20.461897 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000 Jan 20 23:54:20.461974 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000 Jan 20 23:54:20.462059 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 20 23:54:20.462151 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 20 23:54:20.462231 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 20 23:54:20.462313 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 20 23:54:20.462391 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 20 23:54:20.462472 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000 Jan 20 23:54:20.462585 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 20 23:54:20.462669 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 20 23:54:20.462749 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 20 23:54:20.462828 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 20 23:54:20.462908 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 20 23:54:20.462987 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 20 23:54:20.463070 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 20 23:54:20.463165 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 20 23:54:20.463247 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 20 23:54:20.463326 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 20 23:54:20.463406 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 20 23:54:20.463487 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 20 23:54:20.463583 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 20 23:54:20.463667 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 20 
23:54:20.463747 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 20 23:54:20.463826 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 20 23:54:20.463906 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 20 23:54:20.463984 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 20 23:54:20.464070 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned Jan 20 23:54:20.465768 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned Jan 20 23:54:20.465861 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned Jan 20 23:54:20.465945 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned Jan 20 23:54:20.466032 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned Jan 20 23:54:20.466162 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Jan 20 23:54:20.466250 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned Jan 20 23:54:20.466330 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Jan 20 23:54:20.466412 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned Jan 20 23:54:20.466491 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Jan 20 23:54:20.466590 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned Jan 20 23:54:20.466672 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Jan 20 23:54:20.466752 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned Jan 20 23:54:20.466834 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Jan 20 23:54:20.466915 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned Jan 20 23:54:20.466993 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Jan 20 23:54:20.467073 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned Jan 20 23:54:20.467174 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Jan 20 23:54:20.467257 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned Jan 20 23:54:20.467335 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned Jan 20 23:54:20.467420 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned Jan 20 23:54:20.467506 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 20 23:54:20.467638 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 20 23:54:20.467729 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 20 23:54:20.467809 kernel: pci 0000:00:02.0: PCI bridge to [bus 01] Jan 20 23:54:20.467890 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 20 23:54:20.467969 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff] Jan 20 23:54:20.468049 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 20 23:54:20.470252 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 20 23:54:20.470357 kernel: pci 0000:00:02.1: PCI bridge to [bus 02] Jan 20 23:54:20.470438 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 20 23:54:20.470518 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff] Jan 20 23:54:20.470620 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 20 23:54:20.470711 kernel: pci 0000:03:00.0: BAR 4 [mem 
0x8000400000-0x8000403fff 64bit pref]: assigned Jan 20 23:54:20.470793 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 20 23:54:20.470876 kernel: pci 0000:00:02.2: PCI bridge to [bus 03] Jan 20 23:54:20.470962 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 20 23:54:20.471043 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff] Jan 20 23:54:20.471158 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 20 23:54:20.471247 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 20 23:54:20.471332 kernel: pci 0000:00:02.3: PCI bridge to [bus 04] Jan 20 23:54:20.471412 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 20 23:54:20.471492 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff] Jan 20 23:54:20.471586 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 20 23:54:20.471684 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 20 23:54:20.471780 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 20 23:54:20.471868 kernel: pci 0000:00:02.4: PCI bridge to [bus 05] Jan 20 23:54:20.471948 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 20 23:54:20.472029 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff] Jan 20 23:54:20.472165 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 20 23:54:20.472274 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 20 23:54:20.472366 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 20 23:54:20.472458 kernel: pci 0000:00:02.5: PCI bridge to [bus 06] Jan 20 23:54:20.472588 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 20 23:54:20.472677 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 20 23:54:20.472760 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 20 23:54:20.472847 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned Jan 20 23:54:20.472940 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned Jan 20 23:54:20.473038 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned Jan 20 23:54:20.473155 kernel: pci 0000:00:02.6: PCI bridge to [bus 07] Jan 20 23:54:20.473253 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 20 23:54:20.473349 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 20 23:54:20.473440 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 20 23:54:20.473548 kernel: pci 0000:00:02.7: PCI bridge to [bus 08] Jan 20 23:54:20.473643 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 20 23:54:20.473736 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 20 23:54:20.473826 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 20 23:54:20.473926 kernel: pci 0000:00:03.0: PCI bridge to [bus 09] Jan 20 23:54:20.474005 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff] Jan 20 23:54:20.474106 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 20 23:54:20.474191 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 20 23:54:20.474271 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 20 23:54:20.474354 kernel: pci_bus 0000:00: 
resource 5 [io 0x0000-0xffff window] Jan 20 23:54:20.474433 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 20 23:54:20.474519 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 20 23:54:20.474611 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 20 23:54:20.474688 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 20 23:54:20.474780 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 20 23:54:20.474871 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 20 23:54:20.474946 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 20 23:54:20.475032 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 20 23:54:20.475123 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 20 23:54:20.475202 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 20 23:54:20.475286 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 20 23:54:20.475367 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 20 23:54:20.475446 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 20 23:54:20.475568 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 20 23:54:20.475652 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 20 23:54:20.475730 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 20 23:54:20.475816 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 20 23:54:20.475894 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 20 23:54:20.475968 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 20 23:54:20.476052 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 20 23:54:20.477316 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 20 23:54:20.477406 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 20 23:54:20.477509 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 20 23:54:20.477636 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 20 23:54:20.477716 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 20 23:54:20.477800 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 20 23:54:20.477875 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 20 23:54:20.477953 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 20 23:54:20.477964 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 20 23:54:20.477972 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 20 23:54:20.477980 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 20 23:54:20.477989 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 20 23:54:20.477997 kernel: iommu: Default domain type: Translated Jan 20 23:54:20.478005 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 20 23:54:20.478015 kernel: efivars: Registered efivars operations Jan 20 23:54:20.478023 kernel: vgaarb: loaded Jan 20 23:54:20.478032 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 20 23:54:20.478040 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 23:54:20.478048 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 23:54:20.478056 kernel: pnp: PnP ACPI init Jan 20 23:54:20.478173 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 20 
23:54:20.478188 kernel: pnp: PnP ACPI: found 1 devices Jan 20 23:54:20.478196 kernel: NET: Registered PF_INET protocol family Jan 20 23:54:20.478205 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 20 23:54:20.478213 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 20 23:54:20.478221 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 23:54:20.478229 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 23:54:20.478237 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 20 23:54:20.478247 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 20 23:54:20.478255 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 23:54:20.478263 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 20 23:54:20.478271 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 23:54:20.478360 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 20 23:54:20.478371 kernel: PCI: CLS 0 bytes, default 64 Jan 20 23:54:20.478379 kernel: kvm [1]: HYP mode not available Jan 20 23:54:20.478389 kernel: Initialise system trusted keyrings Jan 20 23:54:20.478397 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 20 23:54:20.478405 kernel: Key type asymmetric registered Jan 20 23:54:20.478413 kernel: Asymmetric key parser 'x509' registered Jan 20 23:54:20.478421 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 20 23:54:20.478429 kernel: io scheduler mq-deadline registered Jan 20 23:54:20.478437 kernel: io scheduler kyber registered Jan 20 23:54:20.478446 kernel: io scheduler bfq registered Jan 20 23:54:20.478455 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 20 23:54:20.478550 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 20 23:54:20.478636 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 20 23:54:20.478715 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.478798 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 20 23:54:20.478878 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 20 23:54:20.478960 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.479042 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 20 23:54:20.479393 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 20 23:54:20.479485 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.479588 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 20 23:54:20.479674 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 20 23:54:20.479759 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.479842 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 20 23:54:20.479925 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 20 23:54:20.480005 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.480108 kernel: pcieport 
0000:00:02.5: PME: Signaling with IRQ 55 Jan 20 23:54:20.480195 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 20 23:54:20.480274 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.480360 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 20 23:54:20.480443 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 20 23:54:20.480523 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.480652 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 20 23:54:20.480736 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 20 23:54:20.480816 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.480831 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 20 23:54:20.480912 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 20 23:54:20.480993 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 20 23:54:20.481073 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:54:20.481105 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 20 23:54:20.481114 kernel: ACPI: button: Power Button [PWRB] Jan 20 23:54:20.481126 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 20 23:54:20.481221 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 20 23:54:20.481308 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 20 23:54:20.481320 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 23:54:20.481328 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 20 23:54:20.481409 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 20 23:54:20.481420 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 20 23:54:20.481430 kernel: thunder_xcv, ver 1.0 Jan 20 23:54:20.481438 kernel: thunder_bgx, ver 1.0 Jan 20 23:54:20.481446 kernel: nicpf, ver 1.0 Jan 20 23:54:20.481454 kernel: nicvf, ver 1.0 Jan 20 23:54:20.481570 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 20 23:54:20.481653 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T23:54:19 UTC (1768953259) Jan 20 23:54:20.481664 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 23:54:20.481676 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 20 23:54:20.481684 kernel: watchdog: NMI not fully supported Jan 20 23:54:20.481692 kernel: watchdog: Hard watchdog permanently disabled Jan 20 23:54:20.481701 kernel: NET: Registered PF_INET6 protocol family Jan 20 23:54:20.481709 kernel: Segment Routing with IPv6 Jan 20 23:54:20.481717 kernel: In-situ OAM (IOAM) with IPv6 Jan 20 23:54:20.481725 kernel: NET: Registered PF_PACKET protocol family Jan 20 23:54:20.481735 kernel: Key type dns_resolver registered Jan 20 23:54:20.481743 kernel: registered taskstats version 1 Jan 20 23:54:20.481752 kernel: Loading compiled-in X.509 certificates Jan 20 23:54:20.481760 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ae4cb0460a35d8e9b47e83cc3a018fffd2136c96' Jan 20 23:54:20.481768 kernel: Demotion targets for Node 0: null Jan 20 23:54:20.481776 kernel: Key type .fscrypt 
registered Jan 20 23:54:20.481784 kernel: Key type fscrypt-provisioning registered Jan 20 23:54:20.481793 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 23:54:20.481802 kernel: ima: Allocated hash algorithm: sha1 Jan 20 23:54:20.481810 kernel: ima: No architecture policies found Jan 20 23:54:20.481818 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 23:54:20.481826 kernel: clk: Disabling unused clocks Jan 20 23:54:20.481834 kernel: PM: genpd: Disabling unused power domains Jan 20 23:54:20.481842 kernel: Freeing unused kernel memory: 12480K Jan 20 23:54:20.481852 kernel: Run /init as init process Jan 20 23:54:20.481861 kernel: with arguments: Jan 20 23:54:20.481869 kernel: /init Jan 20 23:54:20.481877 kernel: with environment: Jan 20 23:54:20.481885 kernel: HOME=/ Jan 20 23:54:20.481893 kernel: TERM=linux Jan 20 23:54:20.481901 kernel: ACPI: bus type USB registered Jan 20 23:54:20.481909 kernel: usbcore: registered new interface driver usbfs Jan 20 23:54:20.481918 kernel: usbcore: registered new interface driver hub Jan 20 23:54:20.481926 kernel: usbcore: registered new device driver usb Jan 20 23:54:20.482014 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 20 23:54:20.482131 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 20 23:54:20.482221 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 20 23:54:20.482303 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 20 23:54:20.482388 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 20 23:54:20.482621 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 20 23:54:20.482757 kernel: hub 1-0:1.0: USB hub found Jan 20 23:54:20.482850 kernel: hub 1-0:1.0: 4 ports detected Jan 20 23:54:20.482949 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 20 23:54:20.483376 kernel: hub 2-0:1.0: USB hub found Jan 20 23:54:20.483483 kernel: hub 2-0:1.0: 4 ports detected Jan 20 23:54:20.483494 kernel: SCSI subsystem initialized Jan 20 23:54:20.483622 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 20 23:54:20.483723 kernel: scsi host0: Virtio SCSI HBA Jan 20 23:54:20.483823 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 20 23:54:20.483934 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 20 23:54:20.484023 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 20 23:54:20.484611 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 20 23:54:20.484728 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 20 23:54:20.484817 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 20 23:54:20.484911 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 20 23:54:20.484922 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 23:54:20.484931 kernel: GPT:25804799 != 80003071 Jan 20 23:54:20.484939 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 23:54:20.484947 kernel: GPT:25804799 != 80003071 Jan 20 23:54:20.484955 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 20 23:54:20.484963 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 20 23:54:20.485051 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 20 23:54:20.485187 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 20 23:54:20.485278 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 20 23:54:20.485288 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 20 23:54:20.485375 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 20 23:54:20.485385 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 20 23:54:20.485396 kernel: device-mapper: uevent: version 1.0.3 Jan 20 23:54:20.486437 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 23:54:20.486458 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 23:54:20.486467 kernel: raid6: neonx8 gen() 15421 MB/s Jan 20 23:54:20.486476 kernel: raid6: neonx4 gen() 15480 MB/s Jan 20 23:54:20.486484 kernel: raid6: neonx2 gen() 12968 MB/s Jan 20 23:54:20.486493 kernel: raid6: neonx1 gen() 10259 MB/s Jan 20 23:54:20.486502 kernel: raid6: int64x8 gen() 6742 MB/s Jan 20 23:54:20.486515 kernel: raid6: int64x4 gen() 7311 MB/s Jan 20 23:54:20.486708 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 20 23:54:20.486723 kernel: raid6: int64x2 gen() 6071 MB/s Jan 20 23:54:20.486732 kernel: raid6: int64x1 gen() 4970 MB/s Jan 20 23:54:20.486740 kernel: raid6: using algorithm neonx4 gen() 15480 MB/s Jan 20 23:54:20.486749 kernel: raid6: .... xor() 11903 MB/s, rmw enabled Jan 20 23:54:20.486762 kernel: raid6: using neon recovery algorithm Jan 20 23:54:20.486770 kernel: xor: measuring software checksum speed Jan 20 23:54:20.486778 kernel: 8regs : 20182 MB/sec Jan 20 23:54:20.486786 kernel: 32regs : 19326 MB/sec Jan 20 23:54:20.486794 kernel: arm64_neon : 28244 MB/sec Jan 20 23:54:20.486802 kernel: xor: using function: arm64_neon (28244 MB/sec) Jan 20 23:54:20.486810 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 23:54:20.486820 kernel: BTRFS: device fsid c7d7174b-f392-4c72-bb61-0601db27f9ed devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (212) Jan 20 23:54:20.486828 kernel: BTRFS info (device dm-0): first mount of filesystem c7d7174b-f392-4c72-bb61-0601db27f9ed Jan 20 23:54:20.486837 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:54:20.486845 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 20 23:54:20.486854 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 23:54:20.486862 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 23:54:20.486870 kernel: loop: module loaded Jan 20 23:54:20.486880 kernel: loop0: detected capacity change from 0 to 91840 Jan 20 23:54:20.486889 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 23:54:20.486997 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 20 23:54:20.487010 systemd[1]: Successfully made /usr/ read-only. Jan 20 23:54:20.487022 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 23:54:20.487031 systemd[1]: Detected virtualization kvm. 
Jan 20 23:54:20.487042 systemd[1]: Detected architecture arm64. Jan 20 23:54:20.487050 systemd[1]: Running in initrd. Jan 20 23:54:20.487058 systemd[1]: No hostname configured, using default hostname. Jan 20 23:54:20.487067 systemd[1]: Hostname set to . Jan 20 23:54:20.487075 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 23:54:20.487103 systemd[1]: Queued start job for default target initrd.target. Jan 20 23:54:20.487114 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 23:54:20.487122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:54:20.487132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:54:20.487141 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 23:54:20.487150 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 23:54:20.487159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 23:54:20.487169 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 23:54:20.487178 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:54:20.487187 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:54:20.487196 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 23:54:20.487205 systemd[1]: Reached target paths.target - Path Units. Jan 20 23:54:20.487214 systemd[1]: Reached target slices.target - Slice Units. Jan 20 23:54:20.487225 systemd[1]: Reached target swap.target - Swaps. Jan 20 23:54:20.487234 systemd[1]: Reached target timers.target - Timer Units. Jan 20 23:54:20.487242 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 23:54:20.487251 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 23:54:20.487259 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:54:20.487268 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 23:54:20.487277 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 23:54:20.487287 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:54:20.487296 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 23:54:20.487304 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:54:20.487313 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 23:54:20.487322 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 23:54:20.487331 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 23:54:20.487339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 23:54:20.487349 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 23:54:20.487358 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 23:54:20.487367 systemd[1]: Starting systemd-fsck-usr.service... 
Jan 20 23:54:20.487376 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 23:54:20.487385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 23:54:20.487395 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:54:20.487404 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 23:54:20.487413 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:54:20.487422 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 23:54:20.487430 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 20 23:54:20.487440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 23:54:20.487476 systemd-journald[346]: Collecting audit messages is enabled. Jan 20 23:54:20.487497 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:54:20.487508 kernel: Bridge firewalling registered Jan 20 23:54:20.487517 kernel: audit: type=1130 audit(1768953260.452:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.487526 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 23:54:20.487550 kernel: audit: type=1130 audit(1768953260.456:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.487559 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 23:54:20.487568 kernel: audit: type=1130 audit(1768953260.459:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.487579 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 23:54:20.487587 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 23:54:20.487596 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 23:54:20.487606 systemd-journald[346]: Journal started Jan 20 23:54:20.487626 systemd-journald[346]: Runtime Journal (/run/log/journal/91ccd8b326a6421d9e76294a60ecd57e) is 8M, max 76.5M, 68.5M free. Jan 20 23:54:20.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.454368 systemd-modules-load[349]: Inserted module 'br_netfilter' Jan 20 23:54:20.489629 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 20 23:54:20.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.495108 kernel: audit: type=1130 audit(1768953260.490:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.495278 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:54:20.499111 kernel: audit: type=1130 audit(1768953260.495:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.495000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.500486 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:54:20.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.504315 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 23:54:20.506184 kernel: audit: type=1130 audit(1768953260.501:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.506204 kernel: audit: type=1334 audit(1768953260.503:8): prog-id=6 op=LOAD Jan 20 23:54:20.503000 audit: BPF prog-id=6 op=LOAD Jan 20 23:54:20.508976 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 23:54:20.514132 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 23:54:20.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.519101 kernel: audit: type=1130 audit(1768953260.514:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.523659 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 23:54:20.532916 systemd-tmpfiles[382]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 23:54:20.541205 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:54:20.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.545124 kernel: audit: type=1130 audit(1768953260.541:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:20.553122 dracut-cmdline[385]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=3c423a3ed4865abab898483a94535823dbc3dcf7b9fc4db9a9e44dcb3b3370eb Jan 20 23:54:20.572360 systemd-resolved[379]: Positive Trust Anchors: Jan 20 23:54:20.572376 systemd-resolved[379]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 23:54:20.572379 systemd-resolved[379]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 23:54:20.572410 systemd-resolved[379]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 23:54:20.606203 systemd-resolved[379]: Defaulting to hostname 'linux'. Jan 20 23:54:20.607883 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 23:54:20.608747 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:54:20.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.671104 kernel: Loading iSCSI transport class v2.0-870. Jan 20 23:54:20.683102 kernel: iscsi: registered transport (tcp) Jan 20 23:54:20.696121 kernel: iscsi: registered transport (qla4xxx) Jan 20 23:54:20.696185 kernel: QLogic iSCSI HBA Driver Jan 20 23:54:20.725246 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 23:54:20.764288 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:54:20.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.767751 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 23:54:20.821511 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 23:54:20.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.824119 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 23:54:20.825599 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 23:54:20.875209 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 23:54:20.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:20.876000 audit: BPF prog-id=7 op=LOAD Jan 20 23:54:20.879000 audit: BPF prog-id=8 op=LOAD Jan 20 23:54:20.880100 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:54:20.913816 systemd-udevd[626]: Using default interface naming scheme 'v257'. Jan 20 23:54:20.922802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:54:20.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.926116 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 23:54:20.956982 dracut-pre-trigger[685]: rd.md=0: removing MD RAID activation Jan 20 23:54:20.976620 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 23:54:20.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:20.978000 audit: BPF prog-id=9 op=LOAD Jan 20 23:54:20.978992 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 23:54:20.999250 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 23:54:21.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:21.002447 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 23:54:21.023416 systemd-networkd[748]: lo: Link UP Jan 20 23:54:21.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:21.023424 systemd-networkd[748]: lo: Gained carrier Jan 20 23:54:21.024209 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 23:54:21.024912 systemd[1]: Reached target network.target - Network. Jan 20 23:54:21.080293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:54:21.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:21.082398 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 23:54:21.228122 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 20 23:54:21.239105 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 20 23:54:21.258906 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 20 23:54:21.266102 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 20 23:54:21.271525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. 
Jan 20 23:54:21.278930 systemd-networkd[748]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:21.278945 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:54:21.281305 systemd-networkd[748]: eth1: Link UP Jan 20 23:54:21.282199 systemd-networkd[748]: eth1: Gained carrier Jan 20 23:54:21.282227 systemd-networkd[748]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:21.289508 systemd-networkd[748]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:21.289521 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:54:21.292225 systemd-networkd[748]: eth0: Link UP Jan 20 23:54:21.294776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 20 23:54:21.295579 systemd-networkd[748]: eth0: Gained carrier Jan 20 23:54:21.295595 systemd-networkd[748]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:21.305441 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 20 23:54:21.309563 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 23:54:21.310188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:54:21.310307 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:54:21.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:21.313584 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:54:21.318490 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 20 23:54:21.318736 kernel: usbcore: registered new interface driver usbhid Jan 20 23:54:21.318750 kernel: usbhid: USB HID core driver Jan 20 23:54:21.321039 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:54:21.322149 systemd-networkd[748]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 20 23:54:21.332100 disk-uuid[811]: Primary Header is updated. Jan 20 23:54:21.332100 disk-uuid[811]: Secondary Entries is updated. Jan 20 23:54:21.332100 disk-uuid[811]: Secondary Header is updated. Jan 20 23:54:21.353209 systemd-networkd[748]: eth0: DHCPv4 address 188.245.60.37/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 20 23:54:21.372773 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:54:21.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:21.462154 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 23:54:21.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:21.463956 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 23:54:21.465623 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:54:21.467224 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 23:54:21.470284 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 23:54:21.500998 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 23:54:21.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.364119 disk-uuid[812]: Warning: The kernel is still using the old partition table. Jan 20 23:54:22.364119 disk-uuid[812]: The new table will be used at the next reboot or after you Jan 20 23:54:22.364119 disk-uuid[812]: run partprobe(8) or kpartx(8) Jan 20 23:54:22.364119 disk-uuid[812]: The operation has completed successfully. Jan 20 23:54:22.373015 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 23:54:22.373195 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 23:54:22.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.375798 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 23:54:22.415578 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (840) Jan 20 23:54:22.415632 kernel: BTRFS info (device sda6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:54:22.416166 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:54:22.421230 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 20 23:54:22.421291 kernel: BTRFS info (device sda6): turning on async discard Jan 20 23:54:22.421319 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 23:54:22.430111 kernel: BTRFS info (device sda6): last unmount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:54:22.431237 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 23:54:22.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.434007 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 20 23:54:22.576016 ignition[859]: Ignition 2.24.0 Jan 20 23:54:22.576029 ignition[859]: Stage: fetch-offline Jan 20 23:54:22.576070 ignition[859]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:22.576187 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:22.576506 ignition[859]: parsed url from cmdline: "" Jan 20 23:54:22.576515 ignition[859]: no config URL provided Jan 20 23:54:22.578196 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 23:54:22.578251 ignition[859]: no config at "/usr/lib/ignition/user.ign" Jan 20 23:54:22.578264 ignition[859]: failed to fetch config: resource requires networking Jan 20 23:54:22.578686 ignition[859]: Ignition finished successfully Jan 20 23:54:22.583189 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 23:54:22.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.585892 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 23:54:22.623306 ignition[868]: Ignition 2.24.0 Jan 20 23:54:22.623324 ignition[868]: Stage: fetch Jan 20 23:54:22.623476 ignition[868]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:22.623484 ignition[868]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:22.623582 ignition[868]: parsed url from cmdline: "" Jan 20 23:54:22.623586 ignition[868]: no config URL provided Jan 20 23:54:22.623590 ignition[868]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 23:54:22.623596 ignition[868]: no config at "/usr/lib/ignition/user.ign" Jan 20 23:54:22.623626 ignition[868]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 20 23:54:22.629346 ignition[868]: GET result: OK Jan 20 23:54:22.630025 ignition[868]: parsing config with SHA512: 8705b6c96e32838bdb62eb1e7f0d0a75840797b8a2325ce21fbfa9453e090d3e766818096166c7da234bb65c6661d2e68c8980f3631fc50b272e2e0343b3ac94 Jan 20 23:54:22.636907 unknown[868]: fetched base config from "system" Jan 20 23:54:22.636920 unknown[868]: fetched base config from "system" Jan 20 23:54:22.638134 unknown[868]: fetched user config from "hetzner" Jan 20 23:54:22.638727 ignition[868]: fetch: fetch complete Jan 20 23:54:22.638736 ignition[868]: fetch: fetch passed Jan 20 23:54:22.638820 ignition[868]: Ignition finished successfully Jan 20 23:54:22.641635 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 23:54:22.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.645755 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 23:54:22.674340 systemd-networkd[748]: eth1: Gained IPv6LL Jan 20 23:54:22.691329 ignition[875]: Ignition 2.24.0 Jan 20 23:54:22.691348 ignition[875]: Stage: kargs Jan 20 23:54:22.691520 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:22.691544 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:22.692564 ignition[875]: kargs: kargs passed Jan 20 23:54:22.692622 ignition[875]: Ignition finished successfully Jan 20 23:54:22.696318 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 20 23:54:22.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.699257 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 23:54:22.725791 ignition[882]: Ignition 2.24.0 Jan 20 23:54:22.725805 ignition[882]: Stage: disks Jan 20 23:54:22.725962 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:22.725971 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:22.726838 ignition[882]: disks: disks passed Jan 20 23:54:22.728408 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 23:54:22.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.726889 ignition[882]: Ignition finished successfully Jan 20 23:54:22.729688 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 23:54:22.731494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 23:54:22.732348 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 23:54:22.732957 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 23:54:22.733685 systemd[1]: Reached target basic.target - Basic System. Jan 20 23:54:22.735699 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 23:54:22.739248 systemd-networkd[748]: eth0: Gained IPv6LL Jan 20 23:54:22.783005 systemd-fsck[890]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 20 23:54:22.787987 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 23:54:22.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:22.795150 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 23:54:22.886103 kernel: EXT4-fs (sda9): mounted filesystem 81ddf123-ac73-4435-a963-542e3692f093 r/w with ordered data mode. Quota mode: none. Jan 20 23:54:22.887621 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 23:54:22.889471 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 23:54:22.894459 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 23:54:22.896333 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 23:54:22.901362 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 20 23:54:22.902026 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 23:54:22.902064 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 23:54:22.916383 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 23:54:22.918490 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 20 23:54:22.923099 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (898) Jan 20 23:54:22.926158 kernel: BTRFS info (device sda6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:54:22.926199 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:54:22.931729 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 20 23:54:22.931783 kernel: BTRFS info (device sda6): turning on async discard Jan 20 23:54:22.932530 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 23:54:22.935422 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 23:54:22.973624 coreos-metadata[900]: Jan 20 23:54:22.973 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 20 23:54:22.977180 coreos-metadata[900]: Jan 20 23:54:22.975 INFO Fetch successful Jan 20 23:54:22.977180 coreos-metadata[900]: Jan 20 23:54:22.975 INFO wrote hostname ci-4547-0-0-n-f640cc67e1 to /sysroot/etc/hostname Jan 20 23:54:22.979656 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 23:54:22.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:23.104281 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 23:54:23.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:23.105961 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 23:54:23.110435 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 23:54:23.124169 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 23:54:23.126271 kernel: BTRFS info (device sda6): last unmount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:54:23.150451 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 23:54:23.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:23.162549 ignition[1000]: INFO : Ignition 2.24.0 Jan 20 23:54:23.162549 ignition[1000]: INFO : Stage: mount Jan 20 23:54:23.163700 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:23.163700 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:23.163700 ignition[1000]: INFO : mount: mount passed Jan 20 23:54:23.163700 ignition[1000]: INFO : Ignition finished successfully Jan 20 23:54:23.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:23.165716 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 23:54:23.169380 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 23:54:23.890679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 20 23:54:23.925111 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1011) Jan 20 23:54:23.925611 kernel: BTRFS info (device sda6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:54:23.926721 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:54:23.930481 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 20 23:54:23.930534 kernel: BTRFS info (device sda6): turning on async discard Jan 20 23:54:23.930555 kernel: BTRFS info (device sda6): enabling free space tree Jan 20 23:54:23.933236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 20 23:54:23.970589 ignition[1028]: INFO : Ignition 2.24.0 Jan 20 23:54:23.970589 ignition[1028]: INFO : Stage: files Jan 20 23:54:23.973063 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:23.973063 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:23.973063 ignition[1028]: DEBUG : files: compiled without relabeling support, skipping Jan 20 23:54:23.975340 ignition[1028]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 23:54:23.975340 ignition[1028]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 23:54:23.977978 ignition[1028]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 23:54:23.978964 ignition[1028]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 23:54:23.980043 ignition[1028]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 23:54:23.979072 unknown[1028]: wrote ssh authorized keys file for user: core Jan 20 23:54:23.982624 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 20 23:54:23.983822 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 20 23:54:24.076875 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 23:54:24.178659 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 20 23:54:24.178659 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing 
file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 23:54:24.184846 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 20 23:54:24.478372 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 23:54:24.973858 ignition[1028]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 20 23:54:24.973858 ignition[1028]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 23:54:24.977607 ignition[1028]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 23:54:24.978676 ignition[1028]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 23:54:24.978676 ignition[1028]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 23:54:24.978676 ignition[1028]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 23:54:24.982462 ignition[1028]: INFO : files: files passed Jan 20 23:54:24.982462 ignition[1028]: INFO : Ignition finished successfully Jan 20 23:54:24.998495 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 20 23:54:24.998532 kernel: audit: type=1130 audit(1768953264.988:39): pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:24.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:24.984398 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 23:54:24.989851 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 23:54:24.994373 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 23:54:25.005418 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 23:54:25.005580 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 23:54:25.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.012004 kernel: audit: type=1130 audit(1768953265.006:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.012032 kernel: audit: type=1131 audit(1768953265.007:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.017478 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:54:25.017478 initrd-setup-root-after-ignition[1060]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:54:25.021486 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:54:25.024450 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 23:54:25.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.027434 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 23:54:25.032795 kernel: audit: type=1130 audit(1768953265.027:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.033182 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 23:54:25.095950 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 23:54:25.096129 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 23:54:25.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.100479 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jan 20 23:54:25.101244 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 23:54:25.104025 kernel: audit: type=1130 audit(1768953265.097:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.104054 kernel: audit: type=1131 audit(1768953265.100:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.103983 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 23:54:25.104904 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 23:54:25.150169 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 23:54:25.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.157140 kernel: audit: type=1130 audit(1768953265.152:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.156270 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 23:54:25.179315 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 23:54:25.180555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:54:25.182821 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:54:25.184901 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 23:54:25.186159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 23:54:25.187000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.186315 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 23:54:25.187848 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 23:54:25.191655 kernel: audit: type=1131 audit(1768953265.187:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.189971 systemd[1]: Stopped target basic.target - Basic System. Jan 20 23:54:25.191166 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 23:54:25.192376 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 23:54:25.193566 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 23:54:25.194800 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 23:54:25.195991 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 23:54:25.197140 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 20 23:54:25.198447 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 23:54:25.199559 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 23:54:25.200685 systemd[1]: Stopped target swap.target - Swaps. Jan 20 23:54:25.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.201574 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 23:54:25.205338 kernel: audit: type=1131 audit(1768953265.202:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.201701 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 23:54:25.203283 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:54:25.207231 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:54:25.207931 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 23:54:25.211196 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:54:25.211947 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 20 23:54:25.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.212072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 23:54:25.217418 kernel: audit: type=1131 audit(1768953265.213:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.214281 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 23:54:25.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.214450 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 23:54:25.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.217135 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 23:54:25.217340 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 23:54:25.218235 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 20 23:54:25.218404 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 20 23:54:25.222233 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 23:54:25.223043 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Jan 20 23:54:25.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.223260 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:54:25.227234 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 23:54:25.229626 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 23:54:25.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.229822 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:54:25.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.230950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 23:54:25.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.231120 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:54:25.232010 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 23:54:25.232181 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 23:54:25.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.241000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.240620 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 23:54:25.240738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 23:54:25.255107 ignition[1085]: INFO : Ignition 2.24.0 Jan 20 23:54:25.255107 ignition[1085]: INFO : Stage: umount Jan 20 23:54:25.257974 ignition[1085]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:54:25.257974 ignition[1085]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 20 23:54:25.260626 ignition[1085]: INFO : umount: umount passed Jan 20 23:54:25.260626 ignition[1085]: INFO : Ignition finished successfully Jan 20 23:54:25.258302 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 23:54:25.263755 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 23:54:25.263875 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 23:54:25.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.265232 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 23:54:25.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:25.265326 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 23:54:25.266413 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 23:54:25.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.266507 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 23:54:25.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.267583 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 23:54:25.269000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.267645 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 23:54:25.268758 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 23:54:25.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.268802 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 23:54:25.269796 systemd[1]: Stopped target network.target - Network. Jan 20 23:54:25.270801 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 23:54:25.270857 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 23:54:25.272851 systemd[1]: Stopped target paths.target - Path Units. Jan 20 23:54:25.274614 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 23:54:25.278166 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:54:25.278863 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 23:54:25.280423 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 23:54:25.282147 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 23:54:25.282231 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 23:54:25.283245 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 23:54:25.283279 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 23:54:25.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.284129 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 23:54:25.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.284157 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:54:25.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.285154 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 23:54:25.285215 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jan 20 23:54:25.286129 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 23:54:25.286172 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 23:54:25.287140 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 23:54:25.287187 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 23:54:25.288325 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 23:54:25.289567 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 23:54:25.298248 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 23:54:25.299121 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 23:54:25.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.305000 audit: BPF prog-id=6 op=UNLOAD Jan 20 23:54:25.305465 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 23:54:25.305647 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 23:54:25.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.308000 audit: BPF prog-id=9 op=UNLOAD Jan 20 23:54:25.309188 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 23:54:25.309888 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 23:54:25.309928 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:54:25.313694 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 23:54:25.314243 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 23:54:25.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.314308 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 23:54:25.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.315073 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 23:54:25.315142 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:54:25.316209 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 20 23:54:25.316250 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 23:54:25.319155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:54:25.340778 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 23:54:25.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:25.340896 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:54:25.345322 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 23:54:25.345394 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 23:54:25.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.346058 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 23:54:25.347335 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:54:25.347954 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 23:54:25.348015 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 23:54:25.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.353397 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 23:54:25.353470 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 23:54:25.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.354799 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 23:54:25.354878 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 23:54:25.359150 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 23:54:25.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.359916 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 23:54:25.359984 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:54:25.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.360989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 23:54:25.361036 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:54:25.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.363288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:54:25.363348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:54:25.370593 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 23:54:25.372830 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 23:54:25.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:25.380851 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 23:54:25.381065 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 23:54:25.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:25.384661 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 23:54:25.386917 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 23:54:25.420468 systemd[1]: Switching root. Jan 20 23:54:25.466248 systemd-journald[346]: Journal stopped Jan 20 23:54:26.505179 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Jan 20 23:54:26.505265 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 23:54:26.505283 kernel: SELinux: policy capability open_perms=1 Jan 20 23:54:26.505294 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 23:54:26.505305 kernel: SELinux: policy capability always_check_network=0 Jan 20 23:54:26.505316 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 23:54:26.505327 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 23:54:26.505338 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 23:54:26.505352 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 23:54:26.505368 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 23:54:26.505380 systemd[1]: Successfully loaded SELinux policy in 62.782ms. Jan 20 23:54:26.505403 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.468ms. Jan 20 23:54:26.505422 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 23:54:26.505434 systemd[1]: Detected virtualization kvm. Jan 20 23:54:26.505446 systemd[1]: Detected architecture arm64. Jan 20 23:54:26.505459 systemd[1]: Detected first boot. Jan 20 23:54:26.505474 systemd[1]: Hostname set to . Jan 20 23:54:26.505485 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 23:54:26.505497 zram_generator::config[1128]: No configuration found. Jan 20 23:54:26.505526 kernel: NET: Registered PF_VSOCK protocol family Jan 20 23:54:26.505541 systemd[1]: Populated /etc with preset unit settings. Jan 20 23:54:26.505555 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 23:54:26.505568 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 23:54:26.505580 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 23:54:26.505597 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 23:54:26.505610 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 23:54:26.505624 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 23:54:26.505636 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Jan 20 23:54:26.505648 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 23:54:26.505661 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 23:54:26.505673 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 23:54:26.505686 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 23:54:26.505697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:54:26.505709 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:54:26.505720 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 23:54:26.505732 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 23:54:26.505744 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 23:54:26.505756 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 23:54:26.505767 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 23:54:26.505778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:54:26.505789 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:54:26.505803 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 23:54:26.505815 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 23:54:26.505826 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 23:54:26.505837 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 20 23:54:26.505849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:54:26.505860 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 23:54:26.505872 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 23:54:26.505885 systemd[1]: Reached target slices.target - Slice Units. Jan 20 23:54:26.505897 systemd[1]: Reached target swap.target - Swaps. Jan 20 23:54:26.505909 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 23:54:26.505920 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 23:54:26.505931 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 23:54:26.505943 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:54:26.505955 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 23:54:26.505968 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:54:26.505980 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 23:54:26.505992 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 23:54:26.506003 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 23:54:26.506014 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:54:26.506025 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 23:54:26.506037 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 20 23:54:26.506049 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 23:54:26.506060 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 23:54:26.506072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 23:54:26.506104 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 23:54:26.506117 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 23:54:26.506130 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 23:54:26.506141 systemd[1]: Reached target machines.target - Containers. Jan 20 23:54:26.506155 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 23:54:26.506167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:54:26.506178 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 23:54:26.506190 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 23:54:26.506201 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:54:26.506212 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 23:54:26.506224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:54:26.506237 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 23:54:26.506250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 23:54:26.506262 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 23:54:26.506278 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 23:54:26.506290 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 23:54:26.506302 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 23:54:26.506313 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 23:54:26.506325 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:54:26.506336 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 23:54:26.506348 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 23:54:26.506361 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 23:54:26.506372 kernel: fuse: init (API version 7.41) Jan 20 23:54:26.506383 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 23:54:26.506396 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 23:54:26.506408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 23:54:26.506419 kernel: ACPI: bus type drm_connector registered Jan 20 23:54:26.506429 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 23:54:26.506443 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 23:54:26.506455 systemd[1]: Mounted media.mount - External Media Directory. 
Jan 20 23:54:26.506466 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 23:54:26.506477 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 23:54:26.506488 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 23:54:26.506500 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:54:26.506549 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 23:54:26.506562 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 23:54:26.506574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:54:26.506585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:54:26.506596 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 23:54:26.506610 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 23:54:26.506621 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:54:26.506632 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:54:26.506644 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 23:54:26.506655 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 23:54:26.506667 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 23:54:26.506678 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 23:54:26.506691 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 23:54:26.506705 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 23:54:26.506717 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 23:54:26.506728 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 23:54:26.506740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 23:54:26.506751 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:54:26.506763 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 20 23:54:26.506776 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 23:54:26.506789 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 23:54:26.506801 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 23:54:26.506813 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 23:54:26.506824 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 23:54:26.506837 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 23:54:26.506848 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 23:54:26.506862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:54:26.506874 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:54:26.506886 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jan 20 23:54:26.506899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:54:26.506942 systemd-journald[1193]: Collecting audit messages is enabled. Jan 20 23:54:26.506971 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 23:54:26.506986 systemd-journald[1193]: Journal started Jan 20 23:54:26.507009 systemd-journald[1193]: Runtime Journal (/run/log/journal/91ccd8b326a6421d9e76294a60ecd57e) is 8M, max 76.5M, 68.5M free. Jan 20 23:54:26.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.515200 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 23:54:26.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.359000 audit: BPF prog-id=14 op=UNLOAD Jan 20 23:54:26.359000 audit: BPF prog-id=13 op=UNLOAD Jan 20 23:54:26.361000 audit: BPF prog-id=15 op=LOAD Jan 20 23:54:26.361000 audit: BPF prog-id=16 op=LOAD Jan 20 23:54:26.362000 audit: BPF prog-id=17 op=LOAD Jan 20 23:54:26.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.428000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:26.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.499000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 23:54:26.499000 audit[1193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff81ae060 a2=4000 a3=0 items=0 ppid=1 pid=1193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:54:26.499000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 23:54:26.173186 systemd[1]: Queued start job for default target multi-user.target. Jan 20 23:54:26.183970 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 20 23:54:26.184810 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 23:54:26.520803 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 23:54:26.520857 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 23:54:26.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:26.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.538578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:54:26.542861 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 23:54:26.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.552159 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 23:54:26.556298 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 23:54:26.559124 kernel: loop1: detected capacity change from 0 to 100192 Jan 20 23:54:26.561453 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 23:54:26.567365 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 23:54:26.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.580398 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 23:54:26.591258 systemd-journald[1193]: Time spent on flushing to /var/log/journal/91ccd8b326a6421d9e76294a60ecd57e is 29.992ms for 1296 entries. Jan 20 23:54:26.591258 systemd-journald[1193]: System Journal (/var/log/journal/91ccd8b326a6421d9e76294a60ecd57e) is 8M, max 588.1M, 580.1M free. Jan 20 23:54:26.637222 systemd-journald[1193]: Received client request to flush runtime journal. Jan 20 23:54:26.637294 kernel: loop2: detected capacity change from 0 to 207008 Jan 20 23:54:26.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.604980 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 23:54:26.633603 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:54:26.639590 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 23:54:26.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.648116 kernel: loop3: detected capacity change from 0 to 8 Jan 20 23:54:26.662255 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 23:54:26.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:26.664000 audit: BPF prog-id=18 op=LOAD Jan 20 23:54:26.664000 audit: BPF prog-id=19 op=LOAD Jan 20 23:54:26.664000 audit: BPF prog-id=20 op=LOAD Jan 20 23:54:26.665100 kernel: loop4: detected capacity change from 0 to 45344 Jan 20 23:54:26.668318 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 23:54:26.675000 audit: BPF prog-id=21 op=LOAD Jan 20 23:54:26.678385 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 23:54:26.684300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 23:54:26.688000 audit: BPF prog-id=22 op=LOAD Jan 20 23:54:26.688000 audit: BPF prog-id=23 op=LOAD Jan 20 23:54:26.688000 audit: BPF prog-id=24 op=LOAD Jan 20 23:54:26.691148 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 23:54:26.694000 audit: BPF prog-id=25 op=LOAD Jan 20 23:54:26.695000 audit: BPF prog-id=26 op=LOAD Jan 20 23:54:26.695000 audit: BPF prog-id=27 op=LOAD Jan 20 23:54:26.697534 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 23:54:26.711116 kernel: loop5: detected capacity change from 0 to 100192 Jan 20 23:54:26.728167 kernel: loop6: detected capacity change from 0 to 207008 Jan 20 23:54:26.741948 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 20 23:54:26.741972 systemd-tmpfiles[1272]: ACLs are not supported, ignoring. Jan 20 23:54:26.753206 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:54:26.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.763384 kernel: loop7: detected capacity change from 0 to 8 Jan 20 23:54:26.765102 kernel: loop1: detected capacity change from 0 to 45344 Jan 20 23:54:26.778791 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 23:54:26.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:26.780391 systemd-nsresourced[1273]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 23:54:26.785496 (sd-merge)[1275]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 20 23:54:26.788833 (sd-merge)[1275]: Merged extensions into '/usr'. Jan 20 23:54:26.800252 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 23:54:26.800271 systemd[1]: Reloading... Jan 20 23:54:26.889132 zram_generator::config[1315]: No configuration found. Jan 20 23:54:26.971440 systemd-oomd[1269]: No swap; memory pressure usage will be degraded Jan 20 23:54:26.976012 systemd-resolved[1270]: Positive Trust Anchors: Jan 20 23:54:26.976365 systemd-resolved[1270]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 23:54:26.976422 systemd-resolved[1270]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 23:54:26.976492 systemd-resolved[1270]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 23:54:26.990780 systemd-resolved[1270]: Using system hostname 'ci-4547-0-0-n-f640cc67e1'. Jan 20 23:54:27.107720 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 23:54:27.108039 systemd[1]: Reloading finished in 307 ms. Jan 20 23:54:27.121655 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 23:54:27.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.123919 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 23:54:27.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.125359 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 23:54:27.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.126424 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 23:54:27.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.130264 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:54:27.139392 systemd[1]: Starting ensure-sysext.service... Jan 20 23:54:27.141340 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 20 23:54:27.145000 audit: BPF prog-id=28 op=LOAD Jan 20 23:54:27.145000 audit: BPF prog-id=22 op=UNLOAD Jan 20 23:54:27.145000 audit: BPF prog-id=29 op=LOAD Jan 20 23:54:27.145000 audit: BPF prog-id=30 op=LOAD Jan 20 23:54:27.145000 audit: BPF prog-id=23 op=UNLOAD Jan 20 23:54:27.145000 audit: BPF prog-id=24 op=UNLOAD Jan 20 23:54:27.146000 audit: BPF prog-id=31 op=LOAD Jan 20 23:54:27.146000 audit: BPF prog-id=21 op=UNLOAD Jan 20 23:54:27.147000 audit: BPF prog-id=32 op=LOAD Jan 20 23:54:27.150000 audit: BPF prog-id=15 op=UNLOAD Jan 20 23:54:27.150000 audit: BPF prog-id=33 op=LOAD Jan 20 23:54:27.150000 audit: BPF prog-id=34 op=LOAD Jan 20 23:54:27.150000 audit: BPF prog-id=16 op=UNLOAD Jan 20 23:54:27.150000 audit: BPF prog-id=17 op=UNLOAD Jan 20 23:54:27.151000 audit: BPF prog-id=35 op=LOAD Jan 20 23:54:27.151000 audit: BPF prog-id=18 op=UNLOAD Jan 20 23:54:27.152000 audit: BPF prog-id=36 op=LOAD Jan 20 23:54:27.152000 audit: BPF prog-id=37 op=LOAD Jan 20 23:54:27.152000 audit: BPF prog-id=19 op=UNLOAD Jan 20 23:54:27.152000 audit: BPF prog-id=20 op=UNLOAD Jan 20 23:54:27.153000 audit: BPF prog-id=38 op=LOAD Jan 20 23:54:27.153000 audit: BPF prog-id=25 op=UNLOAD Jan 20 23:54:27.153000 audit: BPF prog-id=39 op=LOAD Jan 20 23:54:27.153000 audit: BPF prog-id=40 op=LOAD Jan 20 23:54:27.153000 audit: BPF prog-id=26 op=UNLOAD Jan 20 23:54:27.153000 audit: BPF prog-id=27 op=UNLOAD Jan 20 23:54:27.165921 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 23:54:27.168054 systemd[1]: Reload requested from client PID 1355 ('systemctl') (unit ensure-sysext.service)... Jan 20 23:54:27.168071 systemd[1]: Reloading... Jan 20 23:54:27.170207 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 23:54:27.170483 systemd-tmpfiles[1356]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 23:54:27.171450 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 20 23:54:27.171498 systemd-tmpfiles[1356]: ACLs are not supported, ignoring. Jan 20 23:54:27.184739 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 23:54:27.184750 systemd-tmpfiles[1356]: Skipping /boot Jan 20 23:54:27.199368 systemd-tmpfiles[1356]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 23:54:27.199383 systemd-tmpfiles[1356]: Skipping /boot Jan 20 23:54:27.245135 zram_generator::config[1388]: No configuration found. Jan 20 23:54:27.405521 systemd[1]: Reloading finished in 237 ms. Jan 20 23:54:27.435257 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 23:54:27.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:27.439000 audit: BPF prog-id=41 op=LOAD Jan 20 23:54:27.439000 audit: BPF prog-id=35 op=UNLOAD Jan 20 23:54:27.439000 audit: BPF prog-id=42 op=LOAD Jan 20 23:54:27.439000 audit: BPF prog-id=43 op=LOAD Jan 20 23:54:27.439000 audit: BPF prog-id=36 op=UNLOAD Jan 20 23:54:27.439000 audit: BPF prog-id=37 op=UNLOAD Jan 20 23:54:27.440000 audit: BPF prog-id=44 op=LOAD Jan 20 23:54:27.440000 audit: BPF prog-id=38 op=UNLOAD Jan 20 23:54:27.440000 audit: BPF prog-id=45 op=LOAD Jan 20 23:54:27.440000 audit: BPF prog-id=46 op=LOAD Jan 20 23:54:27.440000 audit: BPF prog-id=39 op=UNLOAD Jan 20 23:54:27.440000 audit: BPF prog-id=40 op=UNLOAD Jan 20 23:54:27.441000 audit: BPF prog-id=47 op=LOAD Jan 20 23:54:27.441000 audit: BPF prog-id=31 op=UNLOAD Jan 20 23:54:27.441000 audit: BPF prog-id=48 op=LOAD Jan 20 23:54:27.441000 audit: BPF prog-id=32 op=UNLOAD Jan 20 23:54:27.442000 audit: BPF prog-id=49 op=LOAD Jan 20 23:54:27.442000 audit: BPF prog-id=50 op=LOAD Jan 20 23:54:27.442000 audit: BPF prog-id=33 op=UNLOAD Jan 20 23:54:27.442000 audit: BPF prog-id=34 op=UNLOAD Jan 20 23:54:27.443000 audit: BPF prog-id=51 op=LOAD Jan 20 23:54:27.452000 audit: BPF prog-id=28 op=UNLOAD Jan 20 23:54:27.452000 audit: BPF prog-id=52 op=LOAD Jan 20 23:54:27.452000 audit: BPF prog-id=53 op=LOAD Jan 20 23:54:27.452000 audit: BPF prog-id=29 op=UNLOAD Jan 20 23:54:27.453000 audit: BPF prog-id=30 op=UNLOAD Jan 20 23:54:27.456631 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:54:27.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.466069 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 23:54:27.468321 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 23:54:27.471541 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 23:54:27.476391 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 23:54:27.477000 audit: BPF prog-id=8 op=UNLOAD Jan 20 23:54:27.477000 audit: BPF prog-id=7 op=UNLOAD Jan 20 23:54:27.480000 audit: BPF prog-id=54 op=LOAD Jan 20 23:54:27.480000 audit: BPF prog-id=55 op=LOAD Jan 20 23:54:27.483731 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:54:27.490600 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 23:54:27.494565 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:54:27.499692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:54:27.504498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:54:27.519104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 23:54:27.523128 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:54:27.523354 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 20 23:54:27.523450 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:54:27.526636 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:54:27.526804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:54:27.526952 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:54:27.527032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:54:27.530970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:54:27.532551 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 23:54:27.533420 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:54:27.533616 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:54:27.533702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:54:27.540619 systemd[1]: Finished ensure-sysext.service. Jan 20 23:54:27.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.543000 audit: BPF prog-id=56 op=LOAD Jan 20 23:54:27.549524 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 20 23:54:27.556000 audit[1432]: SYSTEM_BOOT pid=1432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.578597 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 23:54:27.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.582141 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 23:54:27.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.585071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:54:27.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:54:27.587000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.585765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:54:27.604790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:54:27.605465 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:54:27.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.607899 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 23:54:27.608470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 23:54:27.610000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.611373 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 23:54:27.611595 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 23:54:27.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:54:27.614024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:54:27.616044 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 23:54:27.626369 systemd-udevd[1431]: Using default interface naming scheme 'v257'. Jan 20 23:54:27.642000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 23:54:27.642000 audit[1468]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff4670490 a2=420 a3=0 items=0 ppid=1427 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:54:27.642000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:54:27.644181 augenrules[1468]: No rules Jan 20 23:54:27.647382 systemd[1]: audit-rules.service: Deactivated successfully. 
Jan 20 23:54:27.650584 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 23:54:27.651710 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 23:54:27.657223 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 23:54:27.669062 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 20 23:54:27.671315 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 23:54:27.684428 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:54:27.689811 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 23:54:27.768401 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 20 23:54:27.853214 systemd-networkd[1481]: lo: Link UP Jan 20 23:54:27.853227 systemd-networkd[1481]: lo: Gained carrier Jan 20 23:54:27.856436 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 23:54:27.857911 systemd-networkd[1481]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:27.857920 systemd-networkd[1481]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:54:27.858276 systemd[1]: Reached target network.target - Network. Jan 20 23:54:27.861246 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 23:54:27.862321 systemd-networkd[1481]: eth0: Link UP Jan 20 23:54:27.865384 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 23:54:27.866362 systemd-networkd[1481]: eth0: Gained carrier Jan 20 23:54:27.866386 systemd-networkd[1481]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:27.912027 systemd-networkd[1481]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:27.912041 systemd-networkd[1481]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:54:27.914938 systemd-networkd[1481]: eth1: Link UP Jan 20 23:54:27.915230 systemd-networkd[1481]: eth1: Gained carrier Jan 20 23:54:27.915254 systemd-networkd[1481]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:54:27.936197 systemd-networkd[1481]: eth0: DHCPv4 address 188.245.60.37/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 20 23:54:27.936862 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 20 23:54:27.940806 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 23:54:27.942152 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 23:54:27.954188 systemd-networkd[1481]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 20 23:54:27.954573 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 20 23:54:27.955236 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. 
Jan 20 23:54:28.044668 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 20 23:54:28.044794 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:54:28.046728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:54:28.050434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:54:28.052932 ldconfig[1429]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 23:54:28.055345 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 23:54:28.056241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:54:28.056339 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:54:28.056370 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:54:28.056398 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 23:54:28.062565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 23:54:28.081965 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 23:54:28.084781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:54:28.085048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:54:28.086406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:54:28.086640 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:54:28.093230 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 20 23:54:28.105776 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 20 23:54:28.105867 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 20 23:54:28.105882 kernel: [drm] features: -context_init Jan 20 23:54:28.105459 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 23:54:28.105748 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 23:54:28.108169 kernel: [drm] number of scanouts: 1 Jan 20 23:54:28.108231 kernel: [drm] number of cap sets: 0 Jan 20 23:54:28.116264 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 20 23:54:28.116954 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:54:28.117021 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 23:54:28.123124 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 20 23:54:28.125180 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 23:54:28.127460 systemd[1]: Reached target sysinit.target - System Initialization. 
Jan 20 23:54:28.128257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 23:54:28.129721 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 23:54:28.131448 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 23:54:28.133311 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 23:54:28.134253 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 23:54:28.136218 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 23:54:28.143328 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 23:54:28.144434 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 23:54:28.144481 systemd[1]: Reached target paths.target - Path Units. Jan 20 23:54:28.145307 systemd[1]: Reached target timers.target - Timer Units. Jan 20 23:54:28.146827 kernel: Console: switching to colour frame buffer device 160x50 Jan 20 23:54:28.157299 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 20 23:54:28.163546 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 23:54:28.166110 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 23:54:28.173364 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 23:54:28.174271 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 23:54:28.176209 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 23:54:28.181898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 23:54:28.183633 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 23:54:28.188176 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 23:54:28.190410 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 23:54:28.191914 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 23:54:28.194118 systemd[1]: Reached target basic.target - Basic System. Jan 20 23:54:28.195146 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 23:54:28.195187 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 23:54:28.199544 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 23:54:28.203630 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 23:54:28.208333 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 23:54:28.211547 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 23:54:28.218467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 23:54:28.226854 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 23:54:28.227559 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Jan 20 23:54:28.233110 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 23:54:28.237716 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 23:54:28.244799 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 20 23:54:28.248401 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 23:54:28.255761 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 23:54:28.266337 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 23:54:28.267713 jq[1556]: false Jan 20 23:54:28.268177 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 23:54:28.268753 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 23:54:28.273347 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 23:54:28.277419 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 23:54:28.280773 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 20 23:54:28.281815 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 23:54:28.283187 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 23:54:28.283519 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 23:54:28.283712 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 23:54:28.285415 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 23:54:28.286026 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 23:54:28.293040 jq[1573]: true Jan 20 23:54:28.343417 jq[1583]: true Jan 20 23:54:28.353180 extend-filesystems[1557]: Found /dev/sda6 Jan 20 23:54:28.356566 extend-filesystems[1557]: Found /dev/sda9 Jan 20 23:54:28.366304 extend-filesystems[1557]: Checking size of /dev/sda9 Jan 20 23:54:28.376266 tar[1579]: linux-arm64/LICENSE Jan 20 23:54:28.376266 tar[1579]: linux-arm64/helm Jan 20 23:54:28.382233 update_engine[1572]: I20260120 23:54:28.381709 1572 main.cc:92] Flatcar Update Engine starting Jan 20 23:54:28.382530 coreos-metadata[1552]: Jan 20 23:54:28.381 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 20 23:54:28.386923 coreos-metadata[1552]: Jan 20 23:54:28.386 INFO Fetch successful Jan 20 23:54:28.389956 coreos-metadata[1552]: Jan 20 23:54:28.387 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 20 23:54:28.393983 coreos-metadata[1552]: Jan 20 23:54:28.392 INFO Fetch successful Jan 20 23:54:28.411647 dbus-daemon[1554]: [system] SELinux support is enabled Jan 20 23:54:28.412232 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 23:54:28.417388 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 23:54:28.421001 extend-filesystems[1557]: Resized partition /dev/sda9 Jan 20 23:54:28.417567 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 20 23:54:28.419216 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 23:54:28.419335 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 23:54:28.432171 extend-filesystems[1613]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 23:54:28.430447 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:54:28.442565 update_engine[1572]: I20260120 23:54:28.441458 1572 update_check_scheduler.cc:74] Next update check in 10m51s Jan 20 23:54:28.441735 systemd[1]: Started update-engine.service - Update Engine. Jan 20 23:54:28.462112 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 23:54:28.466155 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 20 23:54:28.519404 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Jan 20 23:54:28.555093 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 23:54:28.565457 systemd[1]: Starting sshkeys.service... Jan 20 23:54:28.566092 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 20 23:54:28.574469 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:54:28.575249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:54:28.582073 extend-filesystems[1613]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 20 23:54:28.582073 extend-filesystems[1613]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 20 23:54:28.582073 extend-filesystems[1613]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Jan 20 23:54:28.591315 extend-filesystems[1557]: Resized filesystem in /dev/sda9 Jan 20 23:54:28.604263 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 23:54:28.604554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 23:54:28.606405 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 23:54:28.623254 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 23:54:28.627006 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 23:54:28.629205 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 23:54:28.631408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
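
The extend-filesystems entries above grow the root filesystem online from 1617920 to 8410107 blocks of 4 KiB. A quick check of what those block counts mean in bytes:

    # Block counts reported by resize2fs in the log above; "(4k) blocks".
    BLOCK_SIZE = 4096
    OLD_BLOCKS = 1_617_920
    NEW_BLOCKS = 8_410_107

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~6.17 GiB
    print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~32.08 GiB
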
Jan 20 23:54:28.706479 coreos-metadata[1646]: Jan 20 23:54:28.706 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 20 23:54:28.710388 coreos-metadata[1646]: Jan 20 23:54:28.710 INFO Fetch successful Jan 20 23:54:28.716241 unknown[1646]: wrote ssh authorized keys file for user: core Jan 20 23:54:28.727299 containerd[1592]: time="2026-01-20T23:54:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 23:54:28.745096 containerd[1592]: time="2026-01-20T23:54:28.743607080Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 23:54:28.788667 update-ssh-keys[1652]: Updated "/home/core/.ssh/authorized_keys" Jan 20 23:54:28.792864 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 23:54:28.803655 systemd[1]: Finished sshkeys.service. Jan 20 23:54:28.806197 containerd[1592]: time="2026-01-20T23:54:28.805977720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.72µs" Jan 20 23:54:28.806197 containerd[1592]: time="2026-01-20T23:54:28.806021080Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 23:54:28.806197 containerd[1592]: time="2026-01-20T23:54:28.806064800Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 23:54:28.806197 containerd[1592]: time="2026-01-20T23:54:28.806092000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 23:54:28.806358 containerd[1592]: time="2026-01-20T23:54:28.806235760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 23:54:28.806358 containerd[1592]: time="2026-01-20T23:54:28.806252400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806358 containerd[1592]: time="2026-01-20T23:54:28.806308080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806358 containerd[1592]: time="2026-01-20T23:54:28.806319080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806845 containerd[1592]: time="2026-01-20T23:54:28.806596000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806845 containerd[1592]: time="2026-01-20T23:54:28.806619200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806845 containerd[1592]: time="2026-01-20T23:54:28.806631240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806845 containerd[1592]: time="2026-01-20T23:54:28.806639360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806845 containerd[1592]: 
time="2026-01-20T23:54:28.806784560Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806845 containerd[1592]: time="2026-01-20T23:54:28.806797320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 23:54:28.806984 containerd[1592]: time="2026-01-20T23:54:28.806877480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.808550 containerd[1592]: time="2026-01-20T23:54:28.807038760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.810663 containerd[1592]: time="2026-01-20T23:54:28.807073800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 23:54:28.810663 containerd[1592]: time="2026-01-20T23:54:28.810134360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 23:54:28.810663 containerd[1592]: time="2026-01-20T23:54:28.810184720Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 23:54:28.810663 containerd[1592]: time="2026-01-20T23:54:28.810488480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 23:54:28.810663 containerd[1592]: time="2026-01-20T23:54:28.810597680Z" level=info msg="metadata content store policy set" policy=shared Jan 20 23:54:28.822646 containerd[1592]: time="2026-01-20T23:54:28.822585680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822732640Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822829320Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822843360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822857240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822877880Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822891280Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822904240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822916920Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822928800Z" level=info msg="loading 
plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822939080Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822962520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822974440Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 23:54:28.825492 containerd[1592]: time="2026-01-20T23:54:28.822988240Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823149440Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823183960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823202720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823212600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823223280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823232960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823254040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823274600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823286880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823297360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823307200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823340960Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823383040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823409480Z" level=info msg="Start snapshots syncer" Jan 20 23:54:28.825770 containerd[1592]: time="2026-01-20T23:54:28.823439760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 23:54:28.826110 containerd[1592]: time="2026-01-20T23:54:28.823766760Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 23:54:28.826110 containerd[1592]: time="2026-01-20T23:54:28.823827800Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.823891280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824006240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824045840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824056920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824066760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824093880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824115840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824127000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824137520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 20 
23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824148680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824186600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824200400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 23:54:28.826219 containerd[1592]: time="2026-01-20T23:54:28.824209560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824218880Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824227160Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824246080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824258760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824345200Z" level=info msg="runtime interface created" Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824353200Z" level=info msg="created NRI interface" Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824363400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824375640Z" level=info msg="Connect containerd service" Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.824403560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 23:54:28.826455 containerd[1592]: time="2026-01-20T23:54:28.825583080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 23:54:28.913922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:54:28.951857 systemd-logind[1569]: New seat seat0. Jan 20 23:54:28.960214 systemd-logind[1569]: Watching system buttons on /dev/input/event0 (Power Button) Jan 20 23:54:28.960328 systemd-logind[1569]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 20 23:54:28.960678 systemd[1]: Started systemd-logind.service - User Login Management. 
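
The containerd error above ("no network config found in /etc/cni/net.d") is expected on a node whose pod network add-on has not been installed yet; the add-on normally drops a conflist into that directory later. Purely as an illustration (plugin names, subnet, and file name are assumptions, not taken from this system), a minimal bridge conflist of the kind containerd looks for could be generated like this:

    import json

    # Illustrative only: values below are assumptions, not from this machine.
    conflist = {
        "cniVersion": "1.0.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.88.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    # Would be saved under /etc/cni/net.d, e.g. as 10-containerd-net.conflist.
    print(json.dumps(conflist, indent=2))
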
Jan 20 23:54:29.000118 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 23:54:29.035168 containerd[1592]: time="2026-01-20T23:54:29.035106000Z" level=info msg="Start subscribing containerd event" Jan 20 23:54:29.035322 containerd[1592]: time="2026-01-20T23:54:29.035307520Z" level=info msg="Start recovering state" Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035451960Z" level=info msg="Start event monitor" Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035476920Z" level=info msg="Start cni network conf syncer for default" Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035484560Z" level=info msg="Start streaming server" Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035536200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035547360Z" level=info msg="runtime interface starting up..." Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035553360Z" level=info msg="starting plugins..." Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.035569720Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.036223120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.036265280Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 23:54:29.038390 containerd[1592]: time="2026-01-20T23:54:29.036318280Z" level=info msg="containerd successfully booted in 0.309762s" Jan 20 23:54:29.036475 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 23:54:29.166437 tar[1579]: linux-arm64/README.md Jan 20 23:54:29.185447 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 23:54:29.330543 sshd_keygen[1593]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 23:54:29.353530 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 23:54:29.359209 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 23:54:29.383465 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 23:54:29.383957 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 23:54:29.387676 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 23:54:29.417011 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 23:54:29.423335 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 23:54:29.427439 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 20 23:54:29.428282 systemd[1]: Reached target getty.target - Login Prompts. Jan 20 23:54:29.522424 systemd-networkd[1481]: eth1: Gained IPv6LL Jan 20 23:54:29.523711 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 20 23:54:29.528067 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 23:54:29.530265 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 23:54:29.533696 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:29.538359 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 23:54:29.566681 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 20 23:54:29.778455 systemd-networkd[1481]: eth0: Gained IPv6LL Jan 20 23:54:29.779043 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 20 23:54:30.339578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:30.341769 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 23:54:30.343654 systemd[1]: Startup finished in 1.821s (kernel) + 5.471s (initrd) + 4.760s (userspace) = 12.053s. Jan 20 23:54:30.355051 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:54:30.474695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 23:54:30.478368 systemd[1]: Started sshd@0-188.245.60.37:22-14.63.166.251:41901.service - OpenSSH per-connection server daemon (14.63.166.251:41901). Jan 20 23:54:30.838930 kubelet[1715]: E0120 23:54:30.838881 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:54:30.842394 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:54:30.842626 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:54:30.843388 systemd[1]: kubelet.service: Consumed 858ms CPU time, 253.2M memory peak. Jan 20 23:54:31.109641 sshd[1721]: Connection closed by 14.63.166.251 port 41901 [preauth] Jan 20 23:54:31.112315 systemd[1]: sshd@0-188.245.60.37:22-14.63.166.251:41901.service: Deactivated successfully. Jan 20 23:54:40.978154 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 23:54:40.982715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:41.141468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:54:41.153397 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:54:41.199965 kubelet[1740]: E0120 23:54:41.199915 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:54:41.203642 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:54:41.203790 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:54:41.204505 systemd[1]: kubelet.service: Consumed 172ms CPU time, 104.4M memory peak. Jan 20 23:54:51.227910 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 23:54:51.230473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:54:51.415795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
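
The repeated kubelet failures above all trace back to the missing /var/lib/kubelet/config.yaml; on a kubeadm-managed node that file is normally written during "kubeadm join", so the exits and scheduled restarts are expected until the node is bootstrapped. As a hypothetical illustration of the file format only (real deployments carry cluster-specific settings):

    # Hypothetical stub of the file the kubelet error above refers to; only the
    # two mandatory header fields are shown, everything else has defaults.
    STUB = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
    )
    print("would be written to /var/lib/kubelet/config.yaml:")
    print(STUB)
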
Jan 20 23:54:51.431769 (kubelet)[1755]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:54:51.484996 kubelet[1755]: E0120 23:54:51.484847 1755 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:54:51.487987 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:54:51.488156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:54:51.488695 systemd[1]: kubelet.service: Consumed 187ms CPU time, 105.5M memory peak. Jan 20 23:55:00.675194 systemd-timesyncd[1445]: Contacted time server 31.209.85.243:123 (2.flatcar.pool.ntp.org). Jan 20 23:55:00.675338 systemd-timesyncd[1445]: Initial clock synchronization to Tue 2026-01-20 23:55:00.674871 UTC. Jan 20 23:55:00.676417 systemd-resolved[1270]: Clock change detected. Flushing caches. Jan 20 23:55:02.211283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 23:55:02.215344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:55:02.396333 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:02.412726 (kubelet)[1770]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:55:02.458603 kubelet[1770]: E0120 23:55:02.458559 1770 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:55:02.461716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:55:02.461892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:55:02.463202 systemd[1]: kubelet.service: Consumed 174ms CPU time, 107.1M memory peak. Jan 20 23:55:02.990342 systemd[1]: Started sshd@1-188.245.60.37:22-20.161.92.111:40592.service - OpenSSH per-connection server daemon (20.161.92.111:40592). Jan 20 23:55:03.549699 sshd[1778]: Accepted publickey for core from 20.161.92.111 port 40592 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:03.553175 sshd-session[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:03.569106 systemd-logind[1569]: New session 1 of user core. Jan 20 23:55:03.571415 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 23:55:03.573571 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 23:55:03.601469 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 23:55:03.604695 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 23:55:03.622482 (systemd)[1784]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:03.628169 systemd-logind[1569]: New session 2 of user core. Jan 20 23:55:03.758153 systemd[1784]: Queued start job for default target default.target. 
Jan 20 23:55:03.770237 systemd[1784]: Created slice app.slice - User Application Slice. Jan 20 23:55:03.770352 systemd[1784]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 23:55:03.770384 systemd[1784]: Reached target paths.target - Paths. Jan 20 23:55:03.770482 systemd[1784]: Reached target timers.target - Timers. Jan 20 23:55:03.772467 systemd[1784]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 20 23:55:03.773866 systemd[1784]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 23:55:03.795721 systemd[1784]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 23:55:03.795888 systemd[1784]: Reached target sockets.target - Sockets. Jan 20 23:55:03.797491 systemd[1784]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 23:55:03.797851 systemd[1784]: Reached target basic.target - Basic System. Jan 20 23:55:03.797915 systemd[1784]: Reached target default.target - Main User Target. Jan 20 23:55:03.797945 systemd[1784]: Startup finished in 162ms. Jan 20 23:55:03.798580 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 23:55:03.811610 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 23:55:04.124976 systemd[1]: Started sshd@2-188.245.60.37:22-20.161.92.111:40604.service - OpenSSH per-connection server daemon (20.161.92.111:40604). Jan 20 23:55:04.671990 sshd[1798]: Accepted publickey for core from 20.161.92.111 port 40604 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:04.673440 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:04.682062 systemd-logind[1569]: New session 3 of user core. Jan 20 23:55:04.685258 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 23:55:04.963064 sshd[1802]: Connection closed by 20.161.92.111 port 40604 Jan 20 23:55:04.963755 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:04.969276 systemd[1]: sshd@2-188.245.60.37:22-20.161.92.111:40604.service: Deactivated successfully. Jan 20 23:55:04.972393 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 23:55:04.973907 systemd-logind[1569]: Session 3 logged out. Waiting for processes to exit. Jan 20 23:55:04.975557 systemd-logind[1569]: Removed session 3. Jan 20 23:55:05.084836 systemd[1]: Started sshd@3-188.245.60.37:22-20.161.92.111:40614.service - OpenSSH per-connection server daemon (20.161.92.111:40614). Jan 20 23:55:05.676093 sshd[1808]: Accepted publickey for core from 20.161.92.111 port 40614 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:05.677874 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:05.685631 systemd-logind[1569]: New session 4 of user core. Jan 20 23:55:05.691567 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 23:55:05.987370 sshd[1812]: Connection closed by 20.161.92.111 port 40614 Jan 20 23:55:05.986677 sshd-session[1808]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:05.992296 systemd[1]: sshd@3-188.245.60.37:22-20.161.92.111:40614.service: Deactivated successfully. Jan 20 23:55:05.992328 systemd-logind[1569]: Session 4 logged out. Waiting for processes to exit. Jan 20 23:55:05.995152 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 23:55:05.998866 systemd-logind[1569]: Removed session 4. 
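
The sshd entries above identify the client key only by its SHA256:... fingerprint. That value is the unpadded base64 of a SHA-256 digest over the raw key blob from authorized_keys; a small sketch (the key itself is not in the log, so the path in the comment is illustrative):

    import base64
    import hashlib

    def openssh_sha256_fingerprint(authorized_keys_line: str) -> str:
        # authorized_keys format: "<key-type> <base64 key blob> [comment]";
        # the fingerprint is unpadded base64 of SHA-256 over the decoded blob.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Applied to the first line of /home/core/.ssh/authorized_keys, this should
    # reproduce the SHA256:... value logged by sshd above.
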
Jan 20 23:55:06.106413 systemd[1]: Started sshd@4-188.245.60.37:22-20.161.92.111:40618.service - OpenSSH per-connection server daemon (20.161.92.111:40618). Jan 20 23:55:06.701088 sshd[1818]: Accepted publickey for core from 20.161.92.111 port 40618 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:06.702697 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:06.709615 systemd-logind[1569]: New session 5 of user core. Jan 20 23:55:06.715371 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 23:55:07.017659 sshd[1822]: Connection closed by 20.161.92.111 port 40618 Jan 20 23:55:07.016834 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:07.022763 systemd[1]: sshd@4-188.245.60.37:22-20.161.92.111:40618.service: Deactivated successfully. Jan 20 23:55:07.025537 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 23:55:07.027032 systemd-logind[1569]: Session 5 logged out. Waiting for processes to exit. Jan 20 23:55:07.029779 systemd-logind[1569]: Removed session 5. Jan 20 23:55:07.128719 systemd[1]: Started sshd@5-188.245.60.37:22-20.161.92.111:40630.service - OpenSSH per-connection server daemon (20.161.92.111:40630). Jan 20 23:55:07.667026 sshd[1828]: Accepted publickey for core from 20.161.92.111 port 40630 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:07.668938 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:07.674795 systemd-logind[1569]: New session 6 of user core. Jan 20 23:55:07.682431 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 23:55:07.871090 sudo[1833]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 23:55:07.871390 sudo[1833]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:55:07.881435 sudo[1833]: pam_unix(sudo:session): session closed for user root Jan 20 23:55:07.975579 sshd[1832]: Connection closed by 20.161.92.111 port 40630 Jan 20 23:55:07.976625 sshd-session[1828]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:07.985163 systemd-logind[1569]: Session 6 logged out. Waiting for processes to exit. Jan 20 23:55:07.985845 systemd[1]: sshd@5-188.245.60.37:22-20.161.92.111:40630.service: Deactivated successfully. Jan 20 23:55:07.987976 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 23:55:07.991483 systemd-logind[1569]: Removed session 6. Jan 20 23:55:08.103612 systemd[1]: Started sshd@6-188.245.60.37:22-20.161.92.111:40646.service - OpenSSH per-connection server daemon (20.161.92.111:40646). Jan 20 23:55:08.673856 sshd[1840]: Accepted publickey for core from 20.161.92.111 port 40646 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:08.675727 sshd-session[1840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:08.682467 systemd-logind[1569]: New session 7 of user core. Jan 20 23:55:08.688394 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 20 23:55:08.883265 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 23:55:08.883604 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:55:08.886554 sudo[1846]: pam_unix(sudo:session): session closed for user root Jan 20 23:55:08.896907 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 23:55:08.897209 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:55:08.907418 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 23:55:08.963017 kernel: kauditd_printk_skb: 177 callbacks suppressed Jan 20 23:55:08.963213 kernel: audit: type=1305 audit(1768953308.958:222): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 23:55:08.958000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 23:55:08.963432 augenrules[1870]: No rules Jan 20 23:55:08.958000 audit[1870]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc864dd40 a2=420 a3=0 items=0 ppid=1851 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:08.968249 kernel: audit: type=1300 audit(1768953308.958:222): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc864dd40 a2=420 a3=0 items=0 ppid=1851 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:08.968731 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 23:55:08.971133 kernel: audit: type=1327 audit(1768953308.958:222): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:55:08.958000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:55:08.969102 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 23:55:08.973191 sudo[1845]: pam_unix(sudo:session): session closed for user root Jan 20 23:55:08.975434 kernel: audit: type=1130 audit(1768953308.968:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:08.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:08.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:08.977109 kernel: audit: type=1131 audit(1768953308.968:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:08.972000 audit[1845]: USER_END pid=1845 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:08.979215 kernel: audit: type=1106 audit(1768953308.972:225): pid=1845 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:08.972000 audit[1845]: CRED_DISP pid=1845 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:08.980838 kernel: audit: type=1104 audit(1768953308.972:226): pid=1845 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:09.073484 sshd[1844]: Connection closed by 20.161.92.111 port 40646 Jan 20 23:55:09.074017 sshd-session[1840]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:09.075000 audit[1840]: USER_END pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.081185 systemd[1]: sshd@6-188.245.60.37:22-20.161.92.111:40646.service: Deactivated successfully. Jan 20 23:55:09.076000 audit[1840]: CRED_DISP pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.083930 kernel: audit: type=1106 audit(1768953309.075:227): pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.084019 kernel: audit: type=1104 audit(1768953309.076:228): pid=1840 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-188.245.60.37:22-20.161.92.111:40646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:09.086765 kernel: audit: type=1131 audit(1768953309.082:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-188.245.60.37:22-20.161.92.111:40646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:09.086669 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 23:55:09.091349 systemd-logind[1569]: Session 7 logged out. Waiting for processes to exit. Jan 20 23:55:09.092617 systemd-logind[1569]: Removed session 7. 
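
The audit PROCTITLE fields here, and in the Docker netfilter records that follow, are the process command lines hex-encoded with NUL-separated arguments. A small decoder, applied to the auditctl record above:

    def decode_proctitle(hex_proctitle: str) -> str:
        # PROCTITLE is the raw argv of the audited process, hex-encoded,
        # with arguments separated by NUL bytes.
        raw = bytes.fromhex(hex_proctitle)
        return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))  # -> /sbin/auditctl -R /etc/audit/audit.rules
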
Jan 20 23:55:09.185249 systemd[1]: Started sshd@7-188.245.60.37:22-20.161.92.111:40654.service - OpenSSH per-connection server daemon (20.161.92.111:40654). Jan 20 23:55:09.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-188.245.60.37:22-20.161.92.111:40654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:09.743000 audit[1879]: USER_ACCT pid=1879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.744886 sshd[1879]: Accepted publickey for core from 20.161.92.111 port 40654 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:55:09.745000 audit[1879]: CRED_ACQ pid=1879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.745000 audit[1879]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdbab76e0 a2=3 a3=0 items=0 ppid=1 pid=1879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:09.745000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:09.747145 sshd-session[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:09.752618 systemd-logind[1569]: New session 8 of user core. Jan 20 23:55:09.763416 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 23:55:09.766000 audit[1879]: USER_START pid=1879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.768000 audit[1883]: CRED_ACQ pid=1883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:09.947940 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 23:55:09.948262 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:55:09.946000 audit[1884]: USER_ACCT pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:09.947000 audit[1884]: CRED_REFR pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:09.947000 audit[1884]: USER_START pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:10.266329 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 20 23:55:10.288865 (dockerd)[1902]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 23:55:10.533365 dockerd[1902]: time="2026-01-20T23:55:10.532787399Z" level=info msg="Starting up" Jan 20 23:55:10.537665 dockerd[1902]: time="2026-01-20T23:55:10.537560279Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 23:55:10.550934 dockerd[1902]: time="2026-01-20T23:55:10.550881439Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 23:55:10.569942 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4209091043-merged.mount: Deactivated successfully. Jan 20 23:55:10.585697 systemd[1]: var-lib-docker-metacopy\x2dcheck3561341294-merged.mount: Deactivated successfully. Jan 20 23:55:10.599517 dockerd[1902]: time="2026-01-20T23:55:10.599422719Z" level=info msg="Loading containers: start." Jan 20 23:55:10.609088 kernel: Initializing XFRM netlink socket Jan 20 23:55:10.665000 audit[1953]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.665000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe3f9a190 a2=0 a3=0 items=0 ppid=1902 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.665000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 23:55:10.667000 audit[1955]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.667000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff5f87790 a2=0 a3=0 items=0 ppid=1902 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.667000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 23:55:10.671000 audit[1957]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.671000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc2aab570 a2=0 a3=0 items=0 ppid=1902 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.671000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 23:55:10.674000 audit[1959]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.674000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8f3f100 a2=0 a3=0 items=0 ppid=1902 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.674000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 23:55:10.676000 audit[1961]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1961 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.676000 audit[1961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2a7c030 a2=0 a3=0 items=0 ppid=1902 pid=1961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.676000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 23:55:10.679000 audit[1963]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1963 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.679000 audit[1963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff1182410 a2=0 a3=0 items=0 ppid=1902 pid=1963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.679000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:55:10.681000 audit[1965]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1965 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.681000 audit[1965]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7f7bd50 a2=0 a3=0 items=0 ppid=1902 pid=1965 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.681000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 23:55:10.684000 audit[1967]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1967 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.684000 audit[1967]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc8c21690 a2=0 a3=0 items=0 ppid=1902 pid=1967 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.684000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 23:55:10.710000 audit[1970]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.710000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffef60a550 a2=0 a3=0 items=0 ppid=1902 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.710000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 23:55:10.712000 audit[1972]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1972 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.712000 audit[1972]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdd6ec4b0 a2=0 a3=0 items=0 ppid=1902 pid=1972 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.712000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 23:55:10.714000 audit[1974]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1974 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.714000 audit[1974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffcaa31590 a2=0 a3=0 items=0 ppid=1902 pid=1974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.714000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 23:55:10.716000 audit[1976]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1976 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.716000 audit[1976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd1da67e0 a2=0 a3=0 items=0 ppid=1902 pid=1976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.716000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:55:10.718000 audit[1978]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1978 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.718000 audit[1978]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc62e2b40 a2=0 a3=0 items=0 ppid=1902 pid=1978 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.718000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 23:55:10.760000 audit[2008]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.760000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd7e08cf0 a2=0 a3=0 items=0 ppid=1902 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.760000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 23:55:10.763000 audit[2010]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.763000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc539bd00 a2=0 a3=0 items=0 ppid=1902 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 23:55:10.766000 audit[2012]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.766000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeca4b730 a2=0 a3=0 items=0 ppid=1902 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.766000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 23:55:10.768000 audit[2014]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.768000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4230360 a2=0 a3=0 items=0 ppid=1902 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.768000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 23:55:10.770000 audit[2016]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.770000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff6c8c330 a2=0 a3=0 items=0 ppid=1902 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.770000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 23:55:10.773000 audit[2018]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.773000 audit[2018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdec7b280 a2=0 a3=0 items=0 ppid=1902 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.773000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:55:10.776000 audit[2020]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2020 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.776000 audit[2020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcdb905c0 a2=0 a3=0 items=0 ppid=1902 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.776000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 23:55:10.779000 audit[2022]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.779000 audit[2022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffc02fae10 a2=0 a3=0 items=0 ppid=1902 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 23:55:10.782000 audit[2024]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.782000 audit[2024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffff972b0e0 a2=0 a3=0 items=0 ppid=1902 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 23:55:10.785000 audit[2026]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2026 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.785000 audit[2026]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc77736b0 a2=0 a3=0 items=0 ppid=1902 pid=2026 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.785000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 23:55:10.787000 audit[2028]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2028 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.787000 audit[2028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffcb1c7410 a2=0 a3=0 items=0 ppid=1902 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.787000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 23:55:10.789000 audit[2030]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2030 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.789000 audit[2030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe94fd2d0 a2=0 a3=0 items=0 ppid=1902 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.789000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:55:10.791000 audit[2032]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2032 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.791000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc4444fe0 a2=0 a3=0 items=0 ppid=1902 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.791000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 23:55:10.799000 audit[2037]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.799000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcaf77140 a2=0 a3=0 items=0 ppid=1902 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.799000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 23:55:10.801000 audit[2039]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.801000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff58e2410 a2=0 a3=0 items=0 ppid=1902 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.801000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 23:55:10.803000 audit[2041]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.803000 audit[2041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff54cdfc0 a2=0 a3=0 items=0 ppid=1902 pid=2041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.803000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 23:55:10.805000 audit[2043]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2043 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.805000 audit[2043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce5e0490 a2=0 a3=0 items=0 ppid=1902 pid=2043 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 23:55:10.808000 audit[2045]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2045 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.808000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff279fe20 a2=0 a3=0 items=0 ppid=1902 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.808000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 23:55:10.810000 audit[2047]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2047 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:10.810000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffed4f7270 a2=0 a3=0 items=0 ppid=1902 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.810000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 23:55:10.835000 audit[2052]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.835000 audit[2052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffd08c49e0 a2=0 a3=0 items=0 ppid=1902 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.835000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 23:55:10.837000 audit[2054]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.837000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd6e95cb0 a2=0 a3=0 items=0 ppid=1902 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.837000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 23:55:10.847000 audit[2062]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.847000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff2594ce0 a2=0 a3=0 items=0 ppid=1902 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.847000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 23:55:10.859000 audit[2068]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.859000 audit[2068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe1534c30 a2=0 a3=0 items=0 ppid=1902 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 23:55:10.862000 audit[2070]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.862000 audit[2070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffe92bf680 a2=0 a3=0 items=0 ppid=1902 pid=2070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.862000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 23:55:10.865000 audit[2072]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2072 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.865000 audit[2072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe4fe1e10 a2=0 a3=0 items=0 ppid=1902 pid=2072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.865000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 23:55:10.868000 audit[2074]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2074 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.868000 audit[2074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcaa61800 a2=0 a3=0 items=0 ppid=1902 pid=2074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.868000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 23:55:10.873000 audit[2076]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:10.873000 audit[2076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd28bc5e0 a2=0 a3=0 items=0 ppid=1902 pid=2076 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:10.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 23:55:10.874903 systemd-networkd[1481]: docker0: Link UP Jan 20 23:55:10.881758 dockerd[1902]: time="2026-01-20T23:55:10.881657479Z" level=info msg="Loading containers: done." Jan 20 23:55:10.910379 dockerd[1902]: time="2026-01-20T23:55:10.910223519Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 23:55:10.911022 dockerd[1902]: time="2026-01-20T23:55:10.910627239Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 23:55:10.911022 dockerd[1902]: time="2026-01-20T23:55:10.910823159Z" level=info msg="Initializing buildkit" Jan 20 23:55:10.938394 dockerd[1902]: time="2026-01-20T23:55:10.938354039Z" level=info msg="Completed buildkit initialization" Jan 20 23:55:10.946443 dockerd[1902]: time="2026-01-20T23:55:10.946389199Z" level=info msg="Daemon has completed initialization" Jan 20 23:55:10.946998 dockerd[1902]: time="2026-01-20T23:55:10.946537639Z" level=info msg="API listen on /run/docker.sock" Jan 20 23:55:10.946850 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 23:55:10.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:11.954947 containerd[1592]: time="2026-01-20T23:55:11.954890679Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 20 23:55:12.488221 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 23:55:12.492245 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:55:12.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:12.651575 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:12.661549 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:55:12.706384 kubelet[2125]: E0120 23:55:12.706322 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:55:12.708764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:55:12.708897 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:55:12.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 23:55:12.709665 systemd[1]: kubelet.service: Consumed 162ms CPU time, 106.8M memory peak. Jan 20 23:55:12.765092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631145229.mount: Deactivated successfully. Jan 20 23:55:13.461090 containerd[1592]: time="2026-01-20T23:55:13.460987199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:13.463817 containerd[1592]: time="2026-01-20T23:55:13.463672159Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24846113" Jan 20 23:55:13.465086 containerd[1592]: time="2026-01-20T23:55:13.464724359Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:13.469623 containerd[1592]: time="2026-01-20T23:55:13.469395039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:13.470772 containerd[1592]: time="2026-01-20T23:55:13.470222039Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.51528824s" Jan 20 23:55:13.470772 containerd[1592]: time="2026-01-20T23:55:13.470294119Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 20 23:55:13.471088 containerd[1592]: time="2026-01-20T23:55:13.471009679Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 20 23:55:13.774208 update_engine[1572]: I20260120 23:55:13.773164 1572 update_attempter.cc:509] Updating boot flags... 
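The kubelet failures in this stretch (restart counter 4 here, 5 further down) are the usual pre-bootstrap crash loop: kubelet.service keeps restarting until kubeadm writes /var/lib/kubelet/config.yaml, and every attempt before that exits with the "no such file or directory" error shown above. A minimal sketch, assuming journalctl is available and the kubeadm config path named in the error, that separates this benign loop from a real kubelet failure:

    import os
    import subprocess

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path named in the error above

    def kubelet_failure_is_pre_join(journal_lines: int = 20) -> bool:
        """Heuristic: the crash loop is benign if kubelet is only complaining
        that kubeadm has not yet written its config file."""
        if os.path.exists(KUBELET_CONFIG):
            return False  # config exists, so a failing kubelet is a real problem
        out = subprocess.run(
            ["journalctl", "-u", "kubelet", "-n", str(journal_lines), "--no-pager"],
            capture_output=True, text=True, check=False,
        ).stdout
        return KUBELET_CONFIG in out and "no such file or directory" in out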
Jan 20 23:55:14.682062 containerd[1592]: time="2026-01-20T23:55:14.681999759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:14.684304 containerd[1592]: time="2026-01-20T23:55:14.684213399Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 20 23:55:14.685210 containerd[1592]: time="2026-01-20T23:55:14.685079399Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:14.690059 containerd[1592]: time="2026-01-20T23:55:14.688990799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:14.690059 containerd[1592]: time="2026-01-20T23:55:14.689939679Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.21886784s" Jan 20 23:55:14.690059 containerd[1592]: time="2026-01-20T23:55:14.689971279Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 20 23:55:14.690614 containerd[1592]: time="2026-01-20T23:55:14.690534919Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 20 23:55:15.843740 containerd[1592]: time="2026-01-20T23:55:15.843632639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:15.846206 containerd[1592]: time="2026-01-20T23:55:15.846016959Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 20 23:55:15.847552 containerd[1592]: time="2026-01-20T23:55:15.847493559Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:15.852415 containerd[1592]: time="2026-01-20T23:55:15.851684919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:15.854471 containerd[1592]: time="2026-01-20T23:55:15.854230719Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.1635226s" Jan 20 23:55:15.854471 containerd[1592]: time="2026-01-20T23:55:15.854314759Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 20 23:55:15.855272 
containerd[1592]: time="2026-01-20T23:55:15.855142999Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 20 23:55:16.876816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3166311778.mount: Deactivated successfully. Jan 20 23:55:17.255904 containerd[1592]: time="2026-01-20T23:55:17.255785359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:17.258090 containerd[1592]: time="2026-01-20T23:55:17.257751799Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 20 23:55:17.259390 containerd[1592]: time="2026-01-20T23:55:17.259318119Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:17.261713 containerd[1592]: time="2026-01-20T23:55:17.261492319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:17.262104 containerd[1592]: time="2026-01-20T23:55:17.262072559Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.4066816s" Jan 20 23:55:17.262180 containerd[1592]: time="2026-01-20T23:55:17.262104319Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 20 23:55:17.262963 containerd[1592]: time="2026-01-20T23:55:17.262936359Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 20 23:55:17.857964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount25892392.mount: Deactivated successfully. 
Jan 20 23:55:18.480409 containerd[1592]: time="2026-01-20T23:55:18.480147239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:18.481975 containerd[1592]: time="2026-01-20T23:55:18.481911399Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Jan 20 23:55:18.483196 containerd[1592]: time="2026-01-20T23:55:18.483149079Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:18.487063 containerd[1592]: time="2026-01-20T23:55:18.486347439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:18.487692 containerd[1592]: time="2026-01-20T23:55:18.487534439Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.22438008s" Jan 20 23:55:18.487692 containerd[1592]: time="2026-01-20T23:55:18.487581039Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 20 23:55:18.488389 containerd[1592]: time="2026-01-20T23:55:18.488368359Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 23:55:19.054137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67460271.mount: Deactivated successfully. 
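The var-lib-containerd-tmpmounts-containerd\x2dmount... units being deactivated here are transient mount units; systemd derives their names from the mount path by turning "/" into "-" and escaping literal dashes as \x2d. A rough sketch of the reverse mapping (the authoritative rule is systemd-escape --path, which handles more characters than this):

    # Recover the mount path from a systemd-escaped unit name (simplified).
    unit = r"var-lib-containerd-tmpmounts-containerd\x2dmount67460271.mount"

    name = unit.removesuffix(".mount")
    name = name.replace(r"\x2d", "\x00")   # protect escaped dashes first
    path = "/" + name.replace("-", "/")    # remaining dashes were separators
    path = path.replace("\x00", "-")
    print(path)  # /var/lib/containerd/tmpmounts/containerd-mount67460271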
Jan 20 23:55:19.059950 containerd[1592]: time="2026-01-20T23:55:19.059886799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:55:19.062182 containerd[1592]: time="2026-01-20T23:55:19.062111799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 23:55:19.063053 containerd[1592]: time="2026-01-20T23:55:19.062977279Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:55:19.067022 containerd[1592]: time="2026-01-20T23:55:19.066097319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:55:19.067022 containerd[1592]: time="2026-01-20T23:55:19.066841679Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 578.34636ms" Jan 20 23:55:19.067022 containerd[1592]: time="2026-01-20T23:55:19.066876519Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 20 23:55:19.067809 containerd[1592]: time="2026-01-20T23:55:19.067770479Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 20 23:55:19.557087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount251914659.mount: Deactivated successfully. 
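The raw byte counts in these Pulled messages (size "267933" for pause:3.10, "16948420" for coredns, and "67941650" for the etcd image pulled next) are easier to compare once converted to binary units; a tiny helper:

    def human(n: float) -> str:
        # Convert containerd's raw byte counts into binary units.
        for unit in ("B", "KiB", "MiB", "GiB"):
            if n < 1024 or unit == "GiB":
                return f"{int(n)} B" if unit == "B" else f"{n:.1f} {unit}"
            n /= 1024

    print(human(267_933))     # pause:3.10      -> 261.7 KiB
    print(human(16_948_420))  # coredns v1.11.3 -> 16.2 MiB
    print(human(67_941_650))  # etcd:3.5.16-0   -> 64.8 MiB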
Jan 20 23:55:21.487920 containerd[1592]: time="2026-01-20T23:55:21.487229559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:21.488768 containerd[1592]: time="2026-01-20T23:55:21.488703959Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=56456774" Jan 20 23:55:21.489884 containerd[1592]: time="2026-01-20T23:55:21.489842799Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:21.493197 containerd[1592]: time="2026-01-20T23:55:21.493145159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:21.495453 containerd[1592]: time="2026-01-20T23:55:21.495395719Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.42746428s" Jan 20 23:55:21.495848 containerd[1592]: time="2026-01-20T23:55:21.495635199Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 20 23:55:22.711120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 20 23:55:22.715284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:55:22.865285 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:22.869485 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 20 23:55:22.869570 kernel: audit: type=1130 audit(1768953322.864:282): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:22.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:22.875593 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:55:22.927056 kubelet[2357]: E0120 23:55:22.923970 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:55:22.928313 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:55:22.928442 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:55:22.929004 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107M memory peak. Jan 20 23:55:22.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 20 23:55:22.932068 kernel: audit: type=1131 audit(1768953322.927:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:55:27.165768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:27.165927 systemd[1]: kubelet.service: Consumed 159ms CPU time, 107M memory peak. Jan 20 23:55:27.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:27.164000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:27.171556 kernel: audit: type=1130 audit(1768953327.164:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:27.171678 kernel: audit: type=1131 audit(1768953327.164:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:27.169332 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:55:27.205221 systemd[1]: Reload requested from client PID 2372 ('systemctl') (unit session-8.scope)... Jan 20 23:55:27.205407 systemd[1]: Reloading... Jan 20 23:55:27.333217 zram_generator::config[2419]: No configuration found. Jan 20 23:55:27.556140 systemd[1]: Reloading finished in 350 ms. 
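The block of "audit: BPF prog-id=... op=LOAD/UNLOAD" kernel lines that follows appears to be systemd swapping out the per-unit BPF programs it attaches to cgroups as part of the reload that just finished; when scanning such a burst it can help to check that LOADs and UNLOADs roughly pair up. A short sketch that tallies them from journal text:

    import re
    from collections import Counter

    bpf_re = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    def tally_bpf_ops(journal_text: str) -> Counter:
        """Count BPF LOAD/UNLOAD audit records in a chunk of journal output."""
        return Counter(op for _, op in bpf_re.findall(journal_text))

    sample = ("Jan 20 23:55:27.586000 audit: BPF prog-id=61 op=LOAD "
              "Jan 20 23:55:27.586000 audit: BPF prog-id=58 op=UNLOAD")
    print(tally_bpf_ops(sample))  # Counter({'LOAD': 1, 'UNLOAD': 1})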
Jan 20 23:55:27.592315 kernel: audit: type=1334 audit(1768953327.586:286): prog-id=61 op=LOAD Jan 20 23:55:27.592406 kernel: audit: type=1334 audit(1768953327.586:287): prog-id=58 op=UNLOAD Jan 20 23:55:27.592428 kernel: audit: type=1334 audit(1768953327.586:288): prog-id=62 op=LOAD Jan 20 23:55:27.592447 kernel: audit: type=1334 audit(1768953327.586:289): prog-id=63 op=LOAD Jan 20 23:55:27.592467 kernel: audit: type=1334 audit(1768953327.586:290): prog-id=59 op=UNLOAD Jan 20 23:55:27.586000 audit: BPF prog-id=61 op=LOAD Jan 20 23:55:27.586000 audit: BPF prog-id=58 op=UNLOAD Jan 20 23:55:27.586000 audit: BPF prog-id=62 op=LOAD Jan 20 23:55:27.586000 audit: BPF prog-id=63 op=LOAD Jan 20 23:55:27.586000 audit: BPF prog-id=59 op=UNLOAD Jan 20 23:55:27.586000 audit: BPF prog-id=60 op=UNLOAD Jan 20 23:55:27.589000 audit: BPF prog-id=64 op=LOAD Jan 20 23:55:27.589000 audit: BPF prog-id=51 op=UNLOAD Jan 20 23:55:27.589000 audit: BPF prog-id=65 op=LOAD Jan 20 23:55:27.590000 audit: BPF prog-id=66 op=LOAD Jan 20 23:55:27.590000 audit: BPF prog-id=52 op=UNLOAD Jan 20 23:55:27.590000 audit: BPF prog-id=53 op=UNLOAD Jan 20 23:55:27.591000 audit: BPF prog-id=67 op=LOAD Jan 20 23:55:27.591000 audit: BPF prog-id=57 op=UNLOAD Jan 20 23:55:27.594061 kernel: audit: type=1334 audit(1768953327.586:291): prog-id=60 op=UNLOAD Jan 20 23:55:27.593000 audit: BPF prog-id=68 op=LOAD Jan 20 23:55:27.593000 audit: BPF prog-id=48 op=UNLOAD Jan 20 23:55:27.593000 audit: BPF prog-id=69 op=LOAD Jan 20 23:55:27.593000 audit: BPF prog-id=70 op=LOAD Jan 20 23:55:27.593000 audit: BPF prog-id=49 op=UNLOAD Jan 20 23:55:27.593000 audit: BPF prog-id=50 op=UNLOAD Jan 20 23:55:27.594000 audit: BPF prog-id=71 op=LOAD Jan 20 23:55:27.594000 audit: BPF prog-id=56 op=UNLOAD Jan 20 23:55:27.594000 audit: BPF prog-id=72 op=LOAD Jan 20 23:55:27.594000 audit: BPF prog-id=73 op=LOAD Jan 20 23:55:27.594000 audit: BPF prog-id=54 op=UNLOAD Jan 20 23:55:27.594000 audit: BPF prog-id=55 op=UNLOAD Jan 20 23:55:27.596000 audit: BPF prog-id=74 op=LOAD Jan 20 23:55:27.596000 audit: BPF prog-id=47 op=UNLOAD Jan 20 23:55:27.597000 audit: BPF prog-id=75 op=LOAD Jan 20 23:55:27.597000 audit: BPF prog-id=44 op=UNLOAD Jan 20 23:55:27.597000 audit: BPF prog-id=76 op=LOAD Jan 20 23:55:27.597000 audit: BPF prog-id=77 op=LOAD Jan 20 23:55:27.597000 audit: BPF prog-id=45 op=UNLOAD Jan 20 23:55:27.597000 audit: BPF prog-id=46 op=UNLOAD Jan 20 23:55:27.598000 audit: BPF prog-id=78 op=LOAD Jan 20 23:55:27.598000 audit: BPF prog-id=41 op=UNLOAD Jan 20 23:55:27.598000 audit: BPF prog-id=79 op=LOAD Jan 20 23:55:27.598000 audit: BPF prog-id=80 op=LOAD Jan 20 23:55:27.598000 audit: BPF prog-id=42 op=UNLOAD Jan 20 23:55:27.598000 audit: BPF prog-id=43 op=UNLOAD Jan 20 23:55:27.623856 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 23:55:27.623968 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 23:55:27.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:55:27.625123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:27.625194 systemd[1]: kubelet.service: Consumed 109ms CPU time, 95.1M memory peak. Jan 20 23:55:27.627317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:55:27.793585 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
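Every API request in the kubelet startup that follows fails with "dial tcp 188.245.60.37:6443: connect: connection refused", which is expected while the kube-apiserver static pod is not yet running on this node; a plain TCP probe against the same endpoint (taken verbatim from those errors) reproduces the condition:

    import socket

    API_HOST, API_PORT = "188.245.60.37", 6443  # endpoint from the errors below

    def api_server_reachable(timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((API_HOST, API_PORT), timeout=timeout):
                return True
        except OSError:
            return False

    print(api_server_reachable())  # False until the control plane comes up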
Jan 20 23:55:27.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:27.807754 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 23:55:27.853248 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:55:27.854068 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 23:55:27.854068 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:55:27.854068 kubelet[2467]: I0120 23:55:27.853691 2467 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 23:55:29.645739 kubelet[2467]: I0120 23:55:29.645654 2467 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 23:55:29.645739 kubelet[2467]: I0120 23:55:29.645699 2467 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 23:55:29.646453 kubelet[2467]: I0120 23:55:29.646005 2467 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 23:55:29.675455 kubelet[2467]: E0120 23:55:29.675402 2467 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://188.245.60.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.60.37:6443: connect: connection refused" logger="UnhandledError" Jan 20 23:55:29.677447 kubelet[2467]: I0120 23:55:29.677337 2467 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:55:29.688445 kubelet[2467]: I0120 23:55:29.688341 2467 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 23:55:29.692349 kubelet[2467]: I0120 23:55:29.692093 2467 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 23:55:29.692445 kubelet[2467]: I0120 23:55:29.692411 2467 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 23:55:29.692691 kubelet[2467]: I0120 23:55:29.692451 2467 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-f640cc67e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 23:55:29.692780 kubelet[2467]: I0120 23:55:29.692769 2467 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 23:55:29.692811 kubelet[2467]: I0120 23:55:29.692785 2467 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 23:55:29.693045 kubelet[2467]: I0120 23:55:29.693018 2467 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:55:29.697287 kubelet[2467]: I0120 23:55:29.697241 2467 kubelet.go:446] "Attempting to sync node with API server" Jan 20 23:55:29.697287 kubelet[2467]: I0120 23:55:29.697276 2467 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 23:55:29.698834 kubelet[2467]: I0120 23:55:29.698796 2467 kubelet.go:352] "Adding apiserver pod source" Jan 20 23:55:29.698834 kubelet[2467]: I0120 23:55:29.698829 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 23:55:29.701279 kubelet[2467]: W0120 23:55:29.701205 2467 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://188.245.60.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-n-f640cc67e1&limit=500&resourceVersion=0": dial tcp 188.245.60.37:6443: connect: connection refused Jan 20 23:55:29.701472 kubelet[2467]: E0120 23:55:29.701443 2467 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://188.245.60.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-n-f640cc67e1&limit=500&resourceVersion=0\": dial tcp 188.245.60.37:6443: connect: connection refused" logger="UnhandledError" Jan 20 23:55:29.701953 
kubelet[2467]: I0120 23:55:29.701928 2467 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 23:55:29.703105 kubelet[2467]: I0120 23:55:29.702873 2467 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 23:55:29.703105 kubelet[2467]: W0120 23:55:29.703008 2467 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 23:55:29.706768 kubelet[2467]: I0120 23:55:29.706728 2467 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 23:55:29.707103 kubelet[2467]: I0120 23:55:29.707019 2467 server.go:1287] "Started kubelet" Jan 20 23:55:29.712734 kubelet[2467]: W0120 23:55:29.712662 2467 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://188.245.60.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 188.245.60.37:6443: connect: connection refused Jan 20 23:55:29.712831 kubelet[2467]: E0120 23:55:29.712737 2467 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://188.245.60.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.60.37:6443: connect: connection refused" logger="UnhandledError" Jan 20 23:55:29.713411 kubelet[2467]: E0120 23:55:29.713103 2467 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.60.37:6443/api/v1/namespaces/default/events\": dial tcp 188.245.60.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-n-f640cc67e1.188c95ae596d46c7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-f640cc67e1,UID:ci-4547-0-0-n-f640cc67e1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-f640cc67e1,},FirstTimestamp:2026-01-20 23:55:29.706944199 +0000 UTC m=+1.893845041,LastTimestamp:2026-01-20 23:55:29.706944199 +0000 UTC m=+1.893845041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-f640cc67e1,}" Jan 20 23:55:29.715028 kubelet[2467]: I0120 23:55:29.714951 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 23:55:29.715406 kubelet[2467]: I0120 23:55:29.715377 2467 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 23:55:29.715499 kubelet[2467]: I0120 23:55:29.715465 2467 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 23:55:29.716461 kubelet[2467]: I0120 23:55:29.716424 2467 server.go:479] "Adding debug handlers to kubelet server" Jan 20 23:55:29.716833 kubelet[2467]: I0120 23:55:29.716812 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 23:55:29.718782 kubelet[2467]: I0120 23:55:29.718749 2467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 23:55:29.719000 audit[2478]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 
23:55:29.721469 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 20 23:55:29.721529 kernel: audit: type=1325 audit(1768953329.719:328): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.719000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd55e0320 a2=0 a3=0 items=0 ppid=2467 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.724548 kernel: audit: type=1300 audit(1768953329.719:328): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd55e0320 a2=0 a3=0 items=0 ppid=2467 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.724737 kubelet[2467]: I0120 23:55:29.724716 2467 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 23:55:29.725440 kubelet[2467]: E0120 23:55:29.725412 2467 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" Jan 20 23:55:29.719000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 23:55:29.727530 kernel: audit: type=1327 audit(1768953329.719:328): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 23:55:29.727596 kernel: audit: type=1325 audit(1768953329.722:329): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.722000 audit[2479]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.728154 kubelet[2467]: I0120 23:55:29.728127 2467 factory.go:221] Registration of the systemd container factory successfully Jan 20 23:55:29.728257 kubelet[2467]: I0120 23:55:29.728233 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 23:55:29.729117 kernel: audit: type=1300 audit(1768953329.722:329): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe996b4f0 a2=0 a3=0 items=0 ppid=2467 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.722000 audit[2479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe996b4f0 a2=0 a3=0 items=0 ppid=2467 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.731268 kubelet[2467]: I0120 23:55:29.731246 2467 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 23:55:29.731719 kubelet[2467]: I0120 23:55:29.731575 2467 reconciler.go:26] "Reconciler: start to sync state" Jan 20 23:55:29.731826 kubelet[2467]: E0120 23:55:29.731757 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://188.245.60.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-f640cc67e1?timeout=10s\": dial tcp 188.245.60.37:6443: connect: connection refused" interval="200ms" Jan 20 23:55:29.732257 kubelet[2467]: I0120 23:55:29.732226 2467 factory.go:221] Registration of the containerd container factory successfully Jan 20 23:55:29.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 23:55:29.727000 audit[2481]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.735064 kernel: audit: type=1327 audit(1768953329.722:329): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 23:55:29.735114 kernel: audit: type=1325 audit(1768953329.727:330): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.727000 audit[2481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffebfea0b0 a2=0 a3=0 items=0 ppid=2467 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.737984 kernel: audit: type=1300 audit(1768953329.727:330): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffebfea0b0 a2=0 a3=0 items=0 ppid=2467 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.739521 kernel: audit: type=1327 audit(1768953329.727:330): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:55:29.739567 kernel: audit: type=1325 audit(1768953329.731:331): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.727000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:55:29.731000 audit[2483]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.731000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffeb17d8b0 a2=0 a3=0 items=0 ppid=2467 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:55:29.742715 kubelet[2467]: W0120 23:55:29.742662 2467 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://188.245.60.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 188.245.60.37:6443: connect: connection refused Jan 20 23:55:29.745062 kubelet[2467]: E0120 23:55:29.744408 2467 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://188.245.60.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.60.37:6443: connect: connection refused" logger="UnhandledError" Jan 20 23:55:29.746000 audit[2486]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.746000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe23be250 a2=0 a3=0 items=0 ppid=2467 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 23:55:29.749450 kubelet[2467]: I0120 23:55:29.749389 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 23:55:29.749000 audit[2488]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:29.749000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcb2f1d00 a2=0 a3=0 items=0 ppid=2467 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.749000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 23:55:29.750822 kubelet[2467]: I0120 23:55:29.750776 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 23:55:29.750822 kubelet[2467]: I0120 23:55:29.750812 2467 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 23:55:29.750908 kubelet[2467]: I0120 23:55:29.750843 2467 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 23:55:29.750908 kubelet[2467]: I0120 23:55:29.750852 2467 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 23:55:29.750960 kubelet[2467]: E0120 23:55:29.750914 2467 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 23:55:29.750000 audit[2489]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.750000 audit[2489]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd8c07c70 a2=0 a3=0 items=0 ppid=2467 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 23:55:29.753000 audit[2490]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.753000 audit[2490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd091a2e0 a2=0 a3=0 items=0 ppid=2467 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 23:55:29.754000 audit[2491]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:29.754000 audit[2491]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcf88da00 a2=0 a3=0 items=0 ppid=2467 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 23:55:29.756000 audit[2492]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:29.756000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5ee30a0 a2=0 a3=0 items=0 ppid=2467 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.756000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 23:55:29.762116 kubelet[2467]: E0120 23:55:29.762073 2467 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 23:55:29.762679 kubelet[2467]: W0120 23:55:29.762594 2467 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://188.245.60.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 188.245.60.37:6443: connect: connection refused Jan 20 23:55:29.762756 kubelet[2467]: E0120 23:55:29.762686 2467 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://188.245.60.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.60.37:6443: connect: connection refused" logger="UnhandledError" Jan 20 23:55:29.761000 audit[2493]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:29.761000 audit[2493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd39d1900 a2=0 a3=0 items=0 ppid=2467 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.761000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 23:55:29.764000 audit[2496]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:29.764000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc754ab60 a2=0 a3=0 items=0 ppid=2467 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:29.764000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 23:55:29.772299 kubelet[2467]: I0120 23:55:29.772247 2467 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 23:55:29.772404 kubelet[2467]: I0120 23:55:29.772312 2467 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 23:55:29.772404 kubelet[2467]: I0120 23:55:29.772334 2467 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:55:29.775517 kubelet[2467]: I0120 23:55:29.775449 2467 policy_none.go:49] "None policy: Start" Jan 20 23:55:29.775517 kubelet[2467]: I0120 23:55:29.775519 2467 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 23:55:29.775652 kubelet[2467]: I0120 23:55:29.775547 2467 state_mem.go:35] "Initializing new in-memory state store" Jan 20 23:55:29.783204 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 23:55:29.802687 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 23:55:29.807852 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 20 23:55:29.819079 kubelet[2467]: I0120 23:55:29.818810 2467 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 23:55:29.819197 kubelet[2467]: I0120 23:55:29.819105 2467 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 23:55:29.819197 kubelet[2467]: I0120 23:55:29.819124 2467 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 23:55:29.820872 kubelet[2467]: I0120 23:55:29.820811 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 23:55:29.822586 kubelet[2467]: E0120 23:55:29.822527 2467 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 23:55:29.822586 kubelet[2467]: E0120 23:55:29.822588 2467 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-n-f640cc67e1\" not found" Jan 20 23:55:29.865945 systemd[1]: Created slice kubepods-burstable-pod1ebd7f09efce7aeeaef6aff5a8d09ec8.slice - libcontainer container kubepods-burstable-pod1ebd7f09efce7aeeaef6aff5a8d09ec8.slice. Jan 20 23:55:29.887707 kubelet[2467]: E0120 23:55:29.887461 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.891616 systemd[1]: Created slice kubepods-burstable-podee8055c904db73e56bf50c1df7a9426e.slice - libcontainer container kubepods-burstable-podee8055c904db73e56bf50c1df7a9426e.slice. Jan 20 23:55:29.895277 kubelet[2467]: E0120 23:55:29.895138 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.898279 systemd[1]: Created slice kubepods-burstable-pod496b5dbb2f91bd085f1ca33c452062c2.slice - libcontainer container kubepods-burstable-pod496b5dbb2f91bd085f1ca33c452062c2.slice. 
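systemd nests slices according to their dash-separated names, so the per-pod slices created just above (for example kubepods-burstable-pod1ebd7f09efce7aeeaef6aff5a8d09ec8.slice) sit below kubepods.slice/kubepods-burstable.slice in the cgroup tree. A small sketch for locating such a slice from Python, assuming a cgroup v2 unified hierarchy mounted at /sys/fs/cgroup (the hierarchy layout is an assumption, not something stated in the log):

    from pathlib import Path

    # systemd slice naming: a-b-c.slice is a child of a.slice/a-b.slice.
    pod_slice = "kubepods-burstable-pod1ebd7f09efce7aeeaef6aff5a8d09ec8.slice"
    cgroup_root = Path("/sys/fs/cgroup")  # assumes cgroup v2 mounted here
    path = cgroup_root / "kubepods.slice" / "kubepods-burstable.slice" / pod_slice
    if path.is_dir():
        print("pod cgroup:", path)
        print("enabled controllers:", (path / "cgroup.controllers").read_text().strip())
    else:
        print("slice not present (yet):", path)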
Jan 20 23:55:29.901942 kubelet[2467]: E0120 23:55:29.901777 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.922326 kubelet[2467]: I0120 23:55:29.922286 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.923339 kubelet[2467]: E0120 23:55:29.923281 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.60.37:6443/api/v1/nodes\": dial tcp 188.245.60.37:6443: connect: connection refused" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933077 kubelet[2467]: I0120 23:55:29.932894 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933077 kubelet[2467]: I0120 23:55:29.932955 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933077 kubelet[2467]: I0120 23:55:29.932994 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/496b5dbb2f91bd085f1ca33c452062c2-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-f640cc67e1\" (UID: \"496b5dbb2f91bd085f1ca33c452062c2\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933602 kubelet[2467]: E0120 23:55:29.933557 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.60.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-f640cc67e1?timeout=10s\": dial tcp 188.245.60.37:6443: connect: connection refused" interval="400ms" Jan 20 23:55:29.933690 kubelet[2467]: I0120 23:55:29.933027 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ebd7f09efce7aeeaef6aff5a8d09ec8-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" (UID: \"1ebd7f09efce7aeeaef6aff5a8d09ec8\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933767 kubelet[2467]: I0120 23:55:29.933706 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ebd7f09efce7aeeaef6aff5a8d09ec8-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" (UID: \"1ebd7f09efce7aeeaef6aff5a8d09ec8\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933767 kubelet[2467]: I0120 23:55:29.933746 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " 
pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933875 kubelet[2467]: I0120 23:55:29.933786 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ebd7f09efce7aeeaef6aff5a8d09ec8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" (UID: \"1ebd7f09efce7aeeaef6aff5a8d09ec8\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933875 kubelet[2467]: I0120 23:55:29.933823 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:29.933983 kubelet[2467]: I0120 23:55:29.933888 2467 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.126719 kubelet[2467]: I0120 23:55:30.126664 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.127305 kubelet[2467]: E0120 23:55:30.127243 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.60.37:6443/api/v1/nodes\": dial tcp 188.245.60.37:6443: connect: connection refused" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.189241 containerd[1592]: time="2026-01-20T23:55:30.189091079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-f640cc67e1,Uid:1ebd7f09efce7aeeaef6aff5a8d09ec8,Namespace:kube-system,Attempt:0,}" Jan 20 23:55:30.198165 containerd[1592]: time="2026-01-20T23:55:30.198093439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-f640cc67e1,Uid:ee8055c904db73e56bf50c1df7a9426e,Namespace:kube-system,Attempt:0,}" Jan 20 23:55:30.204245 containerd[1592]: time="2026-01-20T23:55:30.204121799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-f640cc67e1,Uid:496b5dbb2f91bd085f1ca33c452062c2,Namespace:kube-system,Attempt:0,}" Jan 20 23:55:30.217477 containerd[1592]: time="2026-01-20T23:55:30.217430199Z" level=info msg="connecting to shim af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba" address="unix:///run/containerd/s/b72197d95a3262be40d82aab7baaa52e4c6d29ce2f26e29571d1b37531703904" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:55:30.250706 containerd[1592]: time="2026-01-20T23:55:30.250359719Z" level=info msg="connecting to shim a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50" address="unix:///run/containerd/s/0a392a4317137361e395446fd0f298e2101737f352eec6b22d34ab79e174a078" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:55:30.260315 systemd[1]: Started cri-containerd-af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba.scope - libcontainer container af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba. 
Jan 20 23:55:30.271682 containerd[1592]: time="2026-01-20T23:55:30.271635679Z" level=info msg="connecting to shim e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a" address="unix:///run/containerd/s/ad9235dbd93b4220359bae3c8d620aa81efeb9babc60be5ffe8831ed052ff4f1" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:55:30.277000 audit: BPF prog-id=81 op=LOAD Jan 20 23:55:30.278000 audit: BPF prog-id=82 op=LOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.278000 audit: BPF prog-id=82 op=UNLOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.278000 audit: BPF prog-id=83 op=LOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.278000 audit: BPF prog-id=84 op=LOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.278000 audit: BPF prog-id=84 op=UNLOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.278000 audit: BPF prog-id=83 op=UNLOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.278000 audit: BPF prog-id=85 op=LOAD Jan 20 23:55:30.278000 audit[2518]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2506 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.278000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6166333033613736366430363338656666313934623230636133346433 Jan 20 23:55:30.298335 systemd[1]: Started cri-containerd-a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50.scope - libcontainer container a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50. 
Jan 20 23:55:30.318000 audit: BPF prog-id=86 op=LOAD Jan 20 23:55:30.318000 audit: BPF prog-id=87 op=LOAD Jan 20 23:55:30.318000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c180 a2=98 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.318000 audit: BPF prog-id=87 op=UNLOAD Jan 20 23:55:30.318000 audit[2556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.318000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.319000 audit: BPF prog-id=88 op=LOAD Jan 20 23:55:30.319000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c3e8 a2=98 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.319000 audit: BPF prog-id=89 op=LOAD Jan 20 23:55:30.319000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400020c168 a2=98 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.319000 audit: BPF prog-id=89 op=UNLOAD Jan 20 23:55:30.319000 audit[2556]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.319000 audit: BPF prog-id=88 op=UNLOAD Jan 20 23:55:30.319000 audit[2556]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.319000 audit: BPF prog-id=90 op=LOAD Jan 20 23:55:30.319000 audit[2556]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400020c648 a2=98 a3=0 items=0 ppid=2537 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.319000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6138346438656562383034373966643831643064343639643831383038 Jan 20 23:55:30.325361 systemd[1]: Started cri-containerd-e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a.scope - libcontainer container e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a. Jan 20 23:55:30.335275 kubelet[2467]: E0120 23:55:30.335220 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.60.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-f640cc67e1?timeout=10s\": dial tcp 188.245.60.37:6443: connect: connection refused" interval="800ms" Jan 20 23:55:30.336273 containerd[1592]: time="2026-01-20T23:55:30.335560319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-f640cc67e1,Uid:1ebd7f09efce7aeeaef6aff5a8d09ec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba\"" Jan 20 23:55:30.341410 containerd[1592]: time="2026-01-20T23:55:30.341352959Z" level=info msg="CreateContainer within sandbox \"af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 23:55:30.354000 audit: BPF prog-id=91 op=LOAD Jan 20 23:55:30.355000 audit: BPF prog-id=92 op=LOAD Jan 20 23:55:30.355000 audit[2583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.355000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.356000 audit: BPF prog-id=92 op=UNLOAD Jan 20 23:55:30.356000 audit[2583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.356000 audit: 
PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.356000 audit: BPF prog-id=93 op=LOAD Jan 20 23:55:30.356000 audit[2583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.356000 audit: BPF prog-id=94 op=LOAD Jan 20 23:55:30.356000 audit[2583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.356000 audit: BPF prog-id=94 op=UNLOAD Jan 20 23:55:30.356000 audit[2583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.356000 audit: BPF prog-id=93 op=UNLOAD Jan 20 23:55:30.356000 audit[2583]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.356000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.356000 audit: BPF prog-id=95 op=LOAD Jan 20 23:55:30.356000 audit[2583]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2558 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.356000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6536373233643536353036623838323230616532316162633135383965 Jan 20 23:55:30.366532 containerd[1592]: time="2026-01-20T23:55:30.366484719Z" level=info msg="Container a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:55:30.372921 containerd[1592]: time="2026-01-20T23:55:30.372880279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-f640cc67e1,Uid:ee8055c904db73e56bf50c1df7a9426e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50\"" Jan 20 23:55:30.377731 containerd[1592]: time="2026-01-20T23:55:30.377689319Z" level=info msg="CreateContainer within sandbox \"a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 23:55:30.378923 containerd[1592]: time="2026-01-20T23:55:30.378885879Z" level=info msg="CreateContainer within sandbox \"af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6\"" Jan 20 23:55:30.383307 containerd[1592]: time="2026-01-20T23:55:30.383253359Z" level=info msg="StartContainer for \"a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6\"" Jan 20 23:55:30.386501 containerd[1592]: time="2026-01-20T23:55:30.386459439Z" level=info msg="connecting to shim a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6" address="unix:///run/containerd/s/b72197d95a3262be40d82aab7baaa52e4c6d29ce2f26e29571d1b37531703904" protocol=ttrpc version=3 Jan 20 23:55:30.392695 containerd[1592]: time="2026-01-20T23:55:30.391891999Z" level=info msg="Container 256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:55:30.409340 systemd[1]: Started cri-containerd-a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6.scope - libcontainer container a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6. 
Jan 20 23:55:30.412939 containerd[1592]: time="2026-01-20T23:55:30.412904479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-f640cc67e1,Uid:496b5dbb2f91bd085f1ca33c452062c2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a\"" Jan 20 23:55:30.415773 containerd[1592]: time="2026-01-20T23:55:30.415731519Z" level=info msg="CreateContainer within sandbox \"a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900\"" Jan 20 23:55:30.417020 containerd[1592]: time="2026-01-20T23:55:30.416992799Z" level=info msg="StartContainer for \"256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900\"" Jan 20 23:55:30.419993 containerd[1592]: time="2026-01-20T23:55:30.419951519Z" level=info msg="CreateContainer within sandbox \"e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 23:55:30.423246 containerd[1592]: time="2026-01-20T23:55:30.423215039Z" level=info msg="connecting to shim 256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900" address="unix:///run/containerd/s/0a392a4317137361e395446fd0f298e2101737f352eec6b22d34ab79e174a078" protocol=ttrpc version=3 Jan 20 23:55:30.427000 audit: BPF prog-id=96 op=LOAD Jan 20 23:55:30.428000 audit: BPF prog-id=97 op=LOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.428000 audit: BPF prog-id=97 op=UNLOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.428000 audit: BPF prog-id=98 op=LOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.428000 
audit: BPF prog-id=99 op=LOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.428000 audit: BPF prog-id=99 op=UNLOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.428000 audit: BPF prog-id=98 op=UNLOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.428000 audit: BPF prog-id=100 op=LOAD Jan 20 23:55:30.428000 audit[2626]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2506 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.428000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393637386135396633366135346661656565643661363765626134 Jan 20 23:55:30.438735 containerd[1592]: time="2026-01-20T23:55:30.438695199Z" level=info msg="Container d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:55:30.456658 containerd[1592]: time="2026-01-20T23:55:30.456557639Z" level=info msg="CreateContainer within sandbox \"e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b\"" Jan 20 23:55:30.458007 systemd[1]: Started cri-containerd-256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900.scope - libcontainer container 256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900. 
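The containerd records above are logfmt-style key=value pairs (time, level, msg, address, and so on). A rough sketch for pulling those fields out of a journal line, keeping any \" escapes inside quoted values as-is (the sample record is copied from the StartContainer message above):

    import re

    # key=value pairs; quoted values may contain \" escapes, bare values run to whitespace.
    LOGFMT = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

    def parse_containerd(line: str) -> dict:
        return {
            m.group(1): m.group(2) if m.group(2) is not None else m.group(3)
            for m in LOGFMT.finditer(line)
        }

    record = 'time="2026-01-20T23:55:30.383253359Z" level=info msg="StartContainer for \\"a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6\\""'
    fields = parse_containerd(record)
    print(fields["level"], "|", fields["msg"])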
Jan 20 23:55:30.460202 containerd[1592]: time="2026-01-20T23:55:30.458323999Z" level=info msg="StartContainer for \"d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b\"" Jan 20 23:55:30.461560 containerd[1592]: time="2026-01-20T23:55:30.461528159Z" level=info msg="connecting to shim d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b" address="unix:///run/containerd/s/ad9235dbd93b4220359bae3c8d620aa81efeb9babc60be5ffe8831ed052ff4f1" protocol=ttrpc version=3 Jan 20 23:55:30.472241 containerd[1592]: time="2026-01-20T23:55:30.472200919Z" level=info msg="StartContainer for \"a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6\" returns successfully" Jan 20 23:55:30.485330 systemd[1]: Started cri-containerd-d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b.scope - libcontainer container d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b. Jan 20 23:55:30.489000 audit: BPF prog-id=101 op=LOAD Jan 20 23:55:30.489000 audit: BPF prog-id=102 op=LOAD Jan 20 23:55:30.489000 audit[2653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.489000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.490000 audit: BPF prog-id=102 op=UNLOAD Jan 20 23:55:30.490000 audit[2653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.490000 audit: BPF prog-id=103 op=LOAD Jan 20 23:55:30.490000 audit[2653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.490000 audit: BPF prog-id=104 op=LOAD Jan 20 23:55:30.490000 audit[2653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.490000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.490000 audit: BPF prog-id=104 op=UNLOAD Jan 20 23:55:30.490000 audit[2653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.490000 audit: BPF prog-id=103 op=UNLOAD Jan 20 23:55:30.490000 audit[2653]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.490000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.491000 audit: BPF prog-id=105 op=LOAD Jan 20 23:55:30.491000 audit[2653]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2537 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.491000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235366539396338333539373330373565336538663834323234636131 Jan 20 23:55:30.522000 audit: BPF prog-id=106 op=LOAD Jan 20 23:55:30.525000 audit: BPF prog-id=107 op=LOAD Jan 20 23:55:30.525000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.525000 audit: BPF prog-id=107 op=UNLOAD Jan 20 23:55:30.525000 audit[2674]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.525000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.525000 audit: BPF prog-id=108 op=LOAD Jan 20 23:55:30.525000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.525000 audit: BPF prog-id=109 op=LOAD Jan 20 23:55:30.525000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.525000 audit: BPF prog-id=109 op=UNLOAD Jan 20 23:55:30.525000 audit[2674]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.525000 audit: BPF prog-id=108 op=UNLOAD Jan 20 23:55:30.525000 audit[2674]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.526000 audit: BPF prog-id=110 op=LOAD Jan 20 23:55:30.526000 audit[2674]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2558 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:30.526000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431356364343239386366353432653834373735366536613965306139 Jan 20 23:55:30.531358 kubelet[2467]: I0120 23:55:30.531323 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.533150 kubelet[2467]: E0120 23:55:30.533111 2467 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.60.37:6443/api/v1/nodes\": dial tcp 188.245.60.37:6443: connect: connection refused" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.544364 containerd[1592]: time="2026-01-20T23:55:30.544330959Z" level=info msg="StartContainer for \"256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900\" returns successfully" Jan 20 23:55:30.589074 containerd[1592]: time="2026-01-20T23:55:30.589015399Z" level=info msg="StartContainer for \"d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b\" returns successfully" Jan 20 23:55:30.776724 kubelet[2467]: E0120 23:55:30.776524 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.776724 kubelet[2467]: E0120 23:55:30.776535 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:30.781831 kubelet[2467]: E0120 23:55:30.781794 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:31.336773 kubelet[2467]: I0120 23:55:31.336736 2467 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:31.785634 kubelet[2467]: E0120 23:55:31.785599 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:31.786295 kubelet[2467]: E0120 23:55:31.786136 2467 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.594059 kubelet[2467]: E0120 23:55:32.592681 2467 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-n-f640cc67e1\" not found" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.708084 kubelet[2467]: I0120 23:55:32.706227 2467 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.712945 kubelet[2467]: I0120 23:55:32.712906 2467 apiserver.go:52] "Watching apiserver" Jan 20 23:55:32.727484 kubelet[2467]: I0120 23:55:32.727443 2467 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.732148 kubelet[2467]: I0120 23:55:32.732097 2467 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 23:55:32.754409 kubelet[2467]: E0120 23:55:32.754362 2467 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-f640cc67e1\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.754409 kubelet[2467]: I0120 23:55:32.754407 2467 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.762387 kubelet[2467]: E0120 23:55:32.762123 2467 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.762387 kubelet[2467]: I0120 23:55:32.762156 2467 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:32.764820 kubelet[2467]: E0120 23:55:32.764787 2467 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:33.128069 kubelet[2467]: I0120 23:55:33.126742 2467 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:33.130094 kubelet[2467]: E0120 23:55:33.130060 2467 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:34.763163 systemd[1]: Reload requested from client PID 2732 ('systemctl') (unit session-8.scope)... Jan 20 23:55:34.763196 systemd[1]: Reloading... Jan 20 23:55:34.875079 zram_generator::config[2785]: No configuration found. Jan 20 23:55:35.085935 systemd[1]: Reloading finished in 322 ms. Jan 20 23:55:35.116147 kubelet[2467]: I0120 23:55:35.116089 2467 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:55:35.116614 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:55:35.129646 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 23:55:35.130003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:35.132886 kernel: kauditd_printk_skb: 158 callbacks suppressed Jan 20 23:55:35.132930 kernel: audit: type=1131 audit(1768953335.129:388): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:35.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:35.130110 systemd[1]: kubelet.service: Consumed 2.326s CPU time, 126.5M memory peak. Jan 20 23:55:35.135247 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 20 23:55:35.134000 audit: BPF prog-id=111 op=LOAD Jan 20 23:55:35.141559 kernel: audit: type=1334 audit(1768953335.134:389): prog-id=111 op=LOAD Jan 20 23:55:35.141657 kernel: audit: type=1334 audit(1768953335.135:390): prog-id=68 op=UNLOAD Jan 20 23:55:35.141677 kernel: audit: type=1334 audit(1768953335.135:391): prog-id=112 op=LOAD Jan 20 23:55:35.141707 kernel: audit: type=1334 audit(1768953335.135:392): prog-id=113 op=LOAD Jan 20 23:55:35.141725 kernel: audit: type=1334 audit(1768953335.135:393): prog-id=69 op=UNLOAD Jan 20 23:55:35.141743 kernel: audit: type=1334 audit(1768953335.135:394): prog-id=70 op=UNLOAD Jan 20 23:55:35.135000 audit: BPF prog-id=68 op=UNLOAD Jan 20 23:55:35.135000 audit: BPF prog-id=112 op=LOAD Jan 20 23:55:35.135000 audit: BPF prog-id=113 op=LOAD Jan 20 23:55:35.135000 audit: BPF prog-id=69 op=UNLOAD Jan 20 23:55:35.135000 audit: BPF prog-id=70 op=UNLOAD Jan 20 23:55:35.145297 kernel: audit: type=1334 audit(1768953335.136:395): prog-id=114 op=LOAD Jan 20 23:55:35.145401 kernel: audit: type=1334 audit(1768953335.136:396): prog-id=115 op=LOAD Jan 20 23:55:35.145430 kernel: audit: type=1334 audit(1768953335.136:397): prog-id=72 op=UNLOAD Jan 20 23:55:35.136000 audit: BPF prog-id=114 op=LOAD Jan 20 23:55:35.136000 audit: BPF prog-id=115 op=LOAD Jan 20 23:55:35.136000 audit: BPF prog-id=72 op=UNLOAD Jan 20 23:55:35.136000 audit: BPF prog-id=73 op=UNLOAD Jan 20 23:55:35.136000 audit: BPF prog-id=116 op=LOAD Jan 20 23:55:35.136000 audit: BPF prog-id=71 op=UNLOAD Jan 20 23:55:35.140000 audit: BPF prog-id=117 op=LOAD Jan 20 23:55:35.140000 audit: BPF prog-id=61 op=UNLOAD Jan 20 23:55:35.141000 audit: BPF prog-id=118 op=LOAD Jan 20 23:55:35.141000 audit: BPF prog-id=119 op=LOAD Jan 20 23:55:35.141000 audit: BPF prog-id=62 op=UNLOAD Jan 20 23:55:35.141000 audit: BPF prog-id=63 op=UNLOAD Jan 20 23:55:35.141000 audit: BPF prog-id=120 op=LOAD Jan 20 23:55:35.141000 audit: BPF prog-id=67 op=UNLOAD Jan 20 23:55:35.142000 audit: BPF prog-id=121 op=LOAD Jan 20 23:55:35.142000 audit: BPF prog-id=74 op=UNLOAD Jan 20 23:55:35.143000 audit: BPF prog-id=122 op=LOAD Jan 20 23:55:35.143000 audit: BPF prog-id=64 op=UNLOAD Jan 20 23:55:35.143000 audit: BPF prog-id=123 op=LOAD Jan 20 23:55:35.144000 audit: BPF prog-id=124 op=LOAD Jan 20 23:55:35.144000 audit: BPF prog-id=65 op=UNLOAD Jan 20 23:55:35.144000 audit: BPF prog-id=66 op=UNLOAD Jan 20 23:55:35.144000 audit: BPF prog-id=125 op=LOAD Jan 20 23:55:35.144000 audit: BPF prog-id=78 op=UNLOAD Jan 20 23:55:35.144000 audit: BPF prog-id=126 op=LOAD Jan 20 23:55:35.150000 audit: BPF prog-id=127 op=LOAD Jan 20 23:55:35.150000 audit: BPF prog-id=79 op=UNLOAD Jan 20 23:55:35.150000 audit: BPF prog-id=80 op=UNLOAD Jan 20 23:55:35.150000 audit: BPF prog-id=128 op=LOAD Jan 20 23:55:35.150000 audit: BPF prog-id=75 op=UNLOAD Jan 20 23:55:35.150000 audit: BPF prog-id=129 op=LOAD Jan 20 23:55:35.150000 audit: BPF prog-id=130 op=LOAD Jan 20 23:55:35.150000 audit: BPF prog-id=76 op=UNLOAD Jan 20 23:55:35.150000 audit: BPF prog-id=77 op=UNLOAD Jan 20 23:55:35.311516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:55:35.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:35.324448 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 23:55:35.379020 kubelet[2824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:55:35.379721 kubelet[2824]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 23:55:35.379787 kubelet[2824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:55:35.380302 kubelet[2824]: I0120 23:55:35.380224 2824 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 23:55:35.389916 kubelet[2824]: I0120 23:55:35.389871 2824 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 20 23:55:35.389916 kubelet[2824]: I0120 23:55:35.389906 2824 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 23:55:35.390439 kubelet[2824]: I0120 23:55:35.390409 2824 server.go:954] "Client rotation is on, will bootstrap in background" Jan 20 23:55:35.392070 kubelet[2824]: I0120 23:55:35.392017 2824 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 20 23:55:35.394956 kubelet[2824]: I0120 23:55:35.394528 2824 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:55:35.404133 kubelet[2824]: I0120 23:55:35.403954 2824 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 23:55:35.406966 kubelet[2824]: I0120 23:55:35.406938 2824 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 23:55:35.407185 kubelet[2824]: I0120 23:55:35.407155 2824 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 23:55:35.407408 kubelet[2824]: I0120 23:55:35.407185 2824 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-f640cc67e1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 23:55:35.407408 kubelet[2824]: I0120 23:55:35.407398 2824 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 23:55:35.407408 kubelet[2824]: I0120 23:55:35.407409 2824 container_manager_linux.go:304] "Creating device plugin manager" Jan 20 23:55:35.407569 kubelet[2824]: I0120 23:55:35.407456 2824 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:55:35.407611 kubelet[2824]: I0120 23:55:35.407598 2824 kubelet.go:446] "Attempting to sync node with API server" Jan 20 23:55:35.407641 kubelet[2824]: I0120 23:55:35.407612 2824 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 23:55:35.407641 kubelet[2824]: I0120 23:55:35.407636 2824 kubelet.go:352] "Adding apiserver pod source" Jan 20 23:55:35.407879 kubelet[2824]: I0120 23:55:35.407649 2824 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 23:55:35.415442 kubelet[2824]: I0120 23:55:35.415397 2824 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 23:55:35.418065 kubelet[2824]: I0120 23:55:35.417157 2824 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 20 23:55:35.418904 kubelet[2824]: I0120 23:55:35.418873 2824 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 23:55:35.418968 kubelet[2824]: I0120 23:55:35.418922 2824 server.go:1287] "Started kubelet" Jan 20 23:55:35.425812 kubelet[2824]: I0120 23:55:35.425779 2824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 23:55:35.434634 kubelet[2824]: I0120 23:55:35.434579 2824 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 20 23:55:35.435514 kubelet[2824]: I0120 23:55:35.435493 2824 server.go:479] "Adding debug handlers to kubelet server" Jan 20 23:55:35.436845 kubelet[2824]: I0120 23:55:35.436501 2824 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 23:55:35.436845 kubelet[2824]: I0120 23:55:35.436705 2824 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 23:55:35.437462 kubelet[2824]: I0120 23:55:35.437426 2824 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 23:55:35.439511 kubelet[2824]: I0120 23:55:35.439491 2824 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 23:55:35.439912 kubelet[2824]: I0120 23:55:35.439895 2824 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 23:55:35.440175 kubelet[2824]: I0120 23:55:35.440163 2824 reconciler.go:26] "Reconciler: start to sync state" Jan 20 23:55:35.444116 kubelet[2824]: I0120 23:55:35.444087 2824 factory.go:221] Registration of the systemd container factory successfully Jan 20 23:55:35.444231 kubelet[2824]: I0120 23:55:35.444212 2824 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 23:55:35.451677 kubelet[2824]: I0120 23:55:35.450592 2824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 23:55:35.452990 kubelet[2824]: I0120 23:55:35.452966 2824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 20 23:55:35.453129 kubelet[2824]: I0120 23:55:35.453116 2824 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 20 23:55:35.453202 kubelet[2824]: I0120 23:55:35.453192 2824 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 20 23:55:35.453510 kubelet[2824]: I0120 23:55:35.453253 2824 kubelet.go:2382] "Starting kubelet main sync loop" Jan 20 23:55:35.453510 kubelet[2824]: E0120 23:55:35.453305 2824 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 23:55:35.453929 kubelet[2824]: I0120 23:55:35.453894 2824 factory.go:221] Registration of the containerd container factory successfully Jan 20 23:55:35.521149 kubelet[2824]: I0120 23:55:35.521117 2824 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521338 2824 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521385 2824 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521594 2824 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521608 2824 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521632 2824 policy_none.go:49] "None policy: Start" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521642 2824 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521654 2824 state_mem.go:35] "Initializing new in-memory state store" Jan 20 23:55:35.522073 kubelet[2824]: I0120 23:55:35.521787 2824 state_mem.go:75] "Updated machine memory state" Jan 20 23:55:35.527776 kubelet[2824]: I0120 23:55:35.527738 2824 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 23:55:35.529080 kubelet[2824]: I0120 23:55:35.528656 2824 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 23:55:35.529080 kubelet[2824]: I0120 23:55:35.528681 2824 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 23:55:35.530910 kubelet[2824]: I0120 23:55:35.530884 2824 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 23:55:35.533727 kubelet[2824]: E0120 23:55:35.533259 2824 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 23:55:35.554477 kubelet[2824]: I0120 23:55:35.554438 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.555681 kubelet[2824]: I0120 23:55:35.555637 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.556893 kubelet[2824]: I0120 23:55:35.556867 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.639619 kubelet[2824]: I0120 23:55:35.638507 2824 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.652082 kubelet[2824]: I0120 23:55:35.651605 2824 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.652082 kubelet[2824]: I0120 23:55:35.651696 2824 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742498 kubelet[2824]: I0120 23:55:35.742434 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ebd7f09efce7aeeaef6aff5a8d09ec8-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" (UID: \"1ebd7f09efce7aeeaef6aff5a8d09ec8\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742700 kubelet[2824]: I0120 23:55:35.742514 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742700 kubelet[2824]: I0120 23:55:35.742563 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/496b5dbb2f91bd085f1ca33c452062c2-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-f640cc67e1\" (UID: \"496b5dbb2f91bd085f1ca33c452062c2\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742700 kubelet[2824]: I0120 23:55:35.742604 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ebd7f09efce7aeeaef6aff5a8d09ec8-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" (UID: \"1ebd7f09efce7aeeaef6aff5a8d09ec8\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742700 kubelet[2824]: I0120 23:55:35.742643 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742700 kubelet[2824]: I0120 23:55:35.742685 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " 
pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742854 kubelet[2824]: I0120 23:55:35.742726 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742854 kubelet[2824]: I0120 23:55:35.742765 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee8055c904db73e56bf50c1df7a9426e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-f640cc67e1\" (UID: \"ee8055c904db73e56bf50c1df7a9426e\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:35.742854 kubelet[2824]: I0120 23:55:35.742808 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ebd7f09efce7aeeaef6aff5a8d09ec8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" (UID: \"1ebd7f09efce7aeeaef6aff5a8d09ec8\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:36.414123 kubelet[2824]: I0120 23:55:36.413808 2824 apiserver.go:52] "Watching apiserver" Jan 20 23:55:36.440449 kubelet[2824]: I0120 23:55:36.440361 2824 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 23:55:36.499889 kubelet[2824]: I0120 23:55:36.499846 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:36.500852 kubelet[2824]: I0120 23:55:36.500808 2824 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:36.511032 kubelet[2824]: E0120 23:55:36.510979 2824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-f640cc67e1\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:36.511729 kubelet[2824]: E0120 23:55:36.511695 2824 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-f640cc67e1\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" Jan 20 23:55:36.552350 kubelet[2824]: I0120 23:55:36.552263 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-n-f640cc67e1" podStartSLOduration=1.552238088 podStartE2EDuration="1.552238088s" podCreationTimestamp="2026-01-20 23:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:55:36.539108604 +0000 UTC m=+1.208862099" watchObservedRunningTime="2026-01-20 23:55:36.552238088 +0000 UTC m=+1.221991503" Jan 20 23:55:36.569192 kubelet[2824]: I0120 23:55:36.569113 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" podStartSLOduration=1.5690915539999999 podStartE2EDuration="1.569091554s" podCreationTimestamp="2026-01-20 23:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 
23:55:36.553143992 +0000 UTC m=+1.222897447" watchObservedRunningTime="2026-01-20 23:55:36.569091554 +0000 UTC m=+1.238844969" Jan 20 23:55:42.245480 kubelet[2824]: I0120 23:55:42.245372 2824 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 23:55:42.246986 containerd[1592]: time="2026-01-20T23:55:42.246941638Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 23:55:42.248239 kubelet[2824]: I0120 23:55:42.248024 2824 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 23:55:42.882347 kubelet[2824]: I0120 23:55:42.882266 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-n-f640cc67e1" podStartSLOduration=7.882219632 podStartE2EDuration="7.882219632s" podCreationTimestamp="2026-01-20 23:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:55:36.569814805 +0000 UTC m=+1.239568220" watchObservedRunningTime="2026-01-20 23:55:42.882219632 +0000 UTC m=+7.551973047" Jan 20 23:55:42.892432 systemd[1]: Created slice kubepods-besteffort-pod1aa42fda_298d_4d10_860f_51e0d63308ad.slice - libcontainer container kubepods-besteffort-pod1aa42fda_298d_4d10_860f_51e0d63308ad.slice. Jan 20 23:55:42.989320 kubelet[2824]: I0120 23:55:42.989179 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aa42fda-298d-4d10-860f-51e0d63308ad-xtables-lock\") pod \"kube-proxy-l8rww\" (UID: \"1aa42fda-298d-4d10-860f-51e0d63308ad\") " pod="kube-system/kube-proxy-l8rww" Jan 20 23:55:42.989697 kubelet[2824]: I0120 23:55:42.989281 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aa42fda-298d-4d10-860f-51e0d63308ad-lib-modules\") pod \"kube-proxy-l8rww\" (UID: \"1aa42fda-298d-4d10-860f-51e0d63308ad\") " pod="kube-system/kube-proxy-l8rww" Jan 20 23:55:42.989697 kubelet[2824]: I0120 23:55:42.989613 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzbss\" (UniqueName: \"kubernetes.io/projected/1aa42fda-298d-4d10-860f-51e0d63308ad-kube-api-access-jzbss\") pod \"kube-proxy-l8rww\" (UID: \"1aa42fda-298d-4d10-860f-51e0d63308ad\") " pod="kube-system/kube-proxy-l8rww" Jan 20 23:55:42.989936 kubelet[2824]: I0120 23:55:42.989670 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1aa42fda-298d-4d10-860f-51e0d63308ad-kube-proxy\") pod \"kube-proxy-l8rww\" (UID: \"1aa42fda-298d-4d10-860f-51e0d63308ad\") " pod="kube-system/kube-proxy-l8rww" Jan 20 23:55:43.202672 containerd[1592]: time="2026-01-20T23:55:43.202488213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l8rww,Uid:1aa42fda-298d-4d10-860f-51e0d63308ad,Namespace:kube-system,Attempt:0,}" Jan 20 23:55:43.227112 containerd[1592]: time="2026-01-20T23:55:43.226345921Z" level=info msg="connecting to shim 74272da962437581ad60d89a34bf703010c2535eeefb513da20a2eead1696cee" address="unix:///run/containerd/s/bcae9ee97a2907ff1b194a1356ff51fc40ae6e545d620db22ea4ca4f5831d2ad" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:55:43.257656 systemd[1]: Started 
cri-containerd-74272da962437581ad60d89a34bf703010c2535eeefb513da20a2eead1696cee.scope - libcontainer container 74272da962437581ad60d89a34bf703010c2535eeefb513da20a2eead1696cee. Jan 20 23:55:43.271664 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 20 23:55:43.271758 kernel: audit: type=1334 audit(1768953343.269:430): prog-id=131 op=LOAD Jan 20 23:55:43.269000 audit: BPF prog-id=131 op=LOAD Jan 20 23:55:43.269000 audit: BPF prog-id=132 op=LOAD Jan 20 23:55:43.276313 kernel: audit: type=1334 audit(1768953343.269:431): prog-id=132 op=LOAD Jan 20 23:55:43.276411 kernel: audit: type=1300 audit(1768953343.269:431): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.269000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.280065 kernel: audit: type=1327 audit(1768953343.269:431): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.280200 kernel: audit: type=1334 audit(1768953343.270:432): prog-id=132 op=UNLOAD Jan 20 23:55:43.270000 audit: BPF prog-id=132 op=UNLOAD Jan 20 23:55:43.270000 audit[2887]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.282707 kernel: audit: type=1300 audit(1768953343.270:432): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.282793 kernel: audit: type=1327 audit(1768953343.270:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.270000 audit: BPF prog-id=133 op=LOAD Jan 20 23:55:43.285404 kernel: audit: type=1334 audit(1768953343.270:433): prog-id=133 op=LOAD Jan 20 23:55:43.285444 kernel: audit: type=1300 
audit(1768953343.270:433): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.270000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.289939 kernel: audit: type=1327 audit(1768953343.270:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.270000 audit: BPF prog-id=134 op=LOAD Jan 20 23:55:43.270000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.270000 audit: BPF prog-id=134 op=UNLOAD Jan 20 23:55:43.270000 audit[2887]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.270000 audit: BPF prog-id=133 op=UNLOAD Jan 20 23:55:43.270000 audit[2887]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.270000 audit: BPF prog-id=135 op=LOAD Jan 20 23:55:43.270000 audit[2887]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2876 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.270000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734323732646139363234333735383161643630643839613334626637 Jan 20 23:55:43.313518 containerd[1592]: time="2026-01-20T23:55:43.313469944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l8rww,Uid:1aa42fda-298d-4d10-860f-51e0d63308ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"74272da962437581ad60d89a34bf703010c2535eeefb513da20a2eead1696cee\"" Jan 20 23:55:43.317702 containerd[1592]: time="2026-01-20T23:55:43.317651811Z" level=info msg="CreateContainer within sandbox \"74272da962437581ad60d89a34bf703010c2535eeefb513da20a2eead1696cee\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 23:55:43.338150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2786551607.mount: Deactivated successfully. Jan 20 23:55:43.341216 containerd[1592]: time="2026-01-20T23:55:43.341162624Z" level=info msg="Container 17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:55:43.349255 systemd[1]: Created slice kubepods-besteffort-pod504a5f39_e7ab_4e84_a061_fdc3bdd4d22a.slice - libcontainer container kubepods-besteffort-pod504a5f39_e7ab_4e84_a061_fdc3bdd4d22a.slice. Jan 20 23:55:43.354700 containerd[1592]: time="2026-01-20T23:55:43.354637788Z" level=info msg="CreateContainer within sandbox \"74272da962437581ad60d89a34bf703010c2535eeefb513da20a2eead1696cee\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065\"" Jan 20 23:55:43.356408 containerd[1592]: time="2026-01-20T23:55:43.356379066Z" level=info msg="StartContainer for \"17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065\"" Jan 20 23:55:43.361071 containerd[1592]: time="2026-01-20T23:55:43.359548168Z" level=info msg="connecting to shim 17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065" address="unix:///run/containerd/s/bcae9ee97a2907ff1b194a1356ff51fc40ae6e545d620db22ea4ca4f5831d2ad" protocol=ttrpc version=3 Jan 20 23:55:43.387442 systemd[1]: Started cri-containerd-17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065.scope - libcontainer container 17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065. 
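The audit PROCTITLE values in the surrounding records (proctitle=72756E63...) are hex-encoded, NUL-separated argv strings; here they are the runc invocations containerd issues for each task shim. A short sketch decoding the leading bytes of one of them (truncated; the full values in the log continue with the --log path and the task ID):

```go
// Sketch: decode an audit PROCTITLE field into the command line it records.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func main() {
	// Leading bytes of a proctitle value from the log (the full value is longer).
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	// argv entries are separated by NUL bytes.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: runc --root /run/containerd/runc/k8s.io
}
```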
Jan 20 23:55:43.391723 kubelet[2824]: I0120 23:55:43.391636 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrl9\" (UniqueName: \"kubernetes.io/projected/504a5f39-e7ab-4e84-a061-fdc3bdd4d22a-kube-api-access-vdrl9\") pod \"tigera-operator-7dcd859c48-q64bh\" (UID: \"504a5f39-e7ab-4e84-a061-fdc3bdd4d22a\") " pod="tigera-operator/tigera-operator-7dcd859c48-q64bh" Jan 20 23:55:43.392696 kubelet[2824]: I0120 23:55:43.391724 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/504a5f39-e7ab-4e84-a061-fdc3bdd4d22a-var-lib-calico\") pod \"tigera-operator-7dcd859c48-q64bh\" (UID: \"504a5f39-e7ab-4e84-a061-fdc3bdd4d22a\") " pod="tigera-operator/tigera-operator-7dcd859c48-q64bh" Jan 20 23:55:43.454000 audit: BPF prog-id=136 op=LOAD Jan 20 23:55:43.454000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2876 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653535343834353037663835333734383535343930393836386336 Jan 20 23:55:43.454000 audit: BPF prog-id=137 op=LOAD Jan 20 23:55:43.454000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2876 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653535343834353037663835333734383535343930393836386336 Jan 20 23:55:43.454000 audit: BPF prog-id=137 op=UNLOAD Jan 20 23:55:43.454000 audit[2913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653535343834353037663835333734383535343930393836386336 Jan 20 23:55:43.454000 audit: BPF prog-id=136 op=UNLOAD Jan 20 23:55:43.454000 audit[2913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2876 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.454000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653535343834353037663835333734383535343930393836386336 Jan 20 23:55:43.454000 audit: BPF prog-id=138 op=LOAD Jan 20 23:55:43.454000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2876 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.454000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137653535343834353037663835333734383535343930393836386336 Jan 20 23:55:43.477886 containerd[1592]: time="2026-01-20T23:55:43.477838427Z" level=info msg="StartContainer for \"17e55484507f853748554909868c6658c09a299cb580c4aadf617da76a0fc065\" returns successfully" Jan 20 23:55:43.530178 kubelet[2824]: I0120 23:55:43.530097 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l8rww" podStartSLOduration=1.530059566 podStartE2EDuration="1.530059566s" podCreationTimestamp="2026-01-20 23:55:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:55:43.52881239 +0000 UTC m=+8.198565805" watchObservedRunningTime="2026-01-20 23:55:43.530059566 +0000 UTC m=+8.199812981" Jan 20 23:55:43.656909 containerd[1592]: time="2026-01-20T23:55:43.656582953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-q64bh,Uid:504a5f39-e7ab-4e84-a061-fdc3bdd4d22a,Namespace:tigera-operator,Attempt:0,}" Jan 20 23:55:43.661000 audit[2981]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=2981 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.661000 audit[2981]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe78205e0 a2=0 a3=1 items=0 ppid=2925 pid=2981 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.661000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 23:55:43.661000 audit[2979]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=2979 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.661000 audit[2979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc37cad80 a2=0 a3=1 items=0 ppid=2925 pid=2979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.661000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 23:55:43.662000 audit[2983]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=2983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.662000 audit[2983]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=100 a0=3 a1=ffffcb3fadb0 a2=0 a3=1 items=0 ppid=2925 pid=2983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.662000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 23:55:43.663000 audit[2982]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=2982 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.663000 audit[2982]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4c13690 a2=0 a3=1 items=0 ppid=2925 pid=2982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 23:55:43.664000 audit[2985]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=2985 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.664000 audit[2985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcd9f7c80 a2=0 a3=1 items=0 ppid=2925 pid=2985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.664000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 23:55:43.665000 audit[2984]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=2984 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.665000 audit[2984]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5a1f960 a2=0 a3=1 items=0 ppid=2925 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.665000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 23:55:43.681707 containerd[1592]: time="2026-01-20T23:55:43.681649556Z" level=info msg="connecting to shim 407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734" address="unix:///run/containerd/s/736c5e97012321b6658e67214af4a828d2f8738f592600cac6fa600382173113" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:55:43.702413 systemd[1]: Started cri-containerd-407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734.scope - libcontainer container 407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734. 
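The pod_startup_latency_tracker entries above report podStartSLOduration as the gap between podCreationTimestamp and watchObservedRunningTime; for kube-proxy-l8rww that is 23:55:43.530059566 minus 23:55:42, i.e. the logged 1.530059566s. A small sketch reproducing the arithmetic from the timestamps as they appear in the log:

```go
// Sketch: recompute podStartSLOduration from the logged timestamps.
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Layout matching the "2026-01-20 23:55:42 +0000 UTC" form used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2026-01-20 23:55:42 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	observed, err := time.Parse(layout, "2026-01-20 23:55:43.530059566 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(observed.Sub(created)) // 1.530059566s
}
```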
Jan 20 23:55:43.721000 audit: BPF prog-id=139 op=LOAD Jan 20 23:55:43.722000 audit: BPF prog-id=140 op=LOAD Jan 20 23:55:43.722000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.722000 audit: BPF prog-id=140 op=UNLOAD Jan 20 23:55:43.722000 audit[3005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.722000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.723000 audit: BPF prog-id=141 op=LOAD Jan 20 23:55:43.723000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.723000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.724000 audit: BPF prog-id=142 op=LOAD Jan 20 23:55:43.724000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.724000 audit: BPF prog-id=142 op=UNLOAD Jan 20 23:55:43.724000 audit[3005]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.724000 audit: BPF prog-id=141 op=UNLOAD Jan 20 23:55:43.724000 audit[3005]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.724000 audit: BPF prog-id=143 op=LOAD Jan 20 23:55:43.724000 audit[3005]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2994 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.724000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430376638376636636164656665653631363962373536636664363536 Jan 20 23:55:43.750549 containerd[1592]: time="2026-01-20T23:55:43.750461158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-q64bh,Uid:504a5f39-e7ab-4e84-a061-fdc3bdd4d22a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734\"" Jan 20 23:55:43.756071 containerd[1592]: time="2026-01-20T23:55:43.754929559Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 23:55:43.769000 audit[3032]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.769000 audit[3032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd9ac3080 a2=0 a3=1 items=0 ppid=2925 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 23:55:43.774000 audit[3034]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.774000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcf488660 a2=0 a3=1 items=0 ppid=2925 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 23:55:43.779000 audit[3037]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.779000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff7515950 a2=0 a3=1 items=0 ppid=2925 
pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.779000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 23:55:43.780000 audit[3038]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.780000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0c97360 a2=0 a3=1 items=0 ppid=2925 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.780000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 23:55:43.783000 audit[3040]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.783000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd7c6d5d0 a2=0 a3=1 items=0 ppid=2925 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.783000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 23:55:43.784000 audit[3041]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3041 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.784000 audit[3041]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe67433e0 a2=0 a3=1 items=0 ppid=2925 pid=3041 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.784000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 23:55:43.787000 audit[3043]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3043 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.787000 audit[3043]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdcd88630 a2=0 a3=1 items=0 ppid=2925 pid=3043 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 23:55:43.793000 audit[3046]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule 
pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.793000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeb591b70 a2=0 a3=1 items=0 ppid=2925 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.793000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 23:55:43.794000 audit[3047]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.794000 audit[3047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc8a1f90 a2=0 a3=1 items=0 ppid=2925 pid=3047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 23:55:43.798000 audit[3049]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.798000 audit[3049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe4389f50 a2=0 a3=1 items=0 ppid=2925 pid=3049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 23:55:43.799000 audit[3050]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.799000 audit[3050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9f96d30 a2=0 a3=1 items=0 ppid=2925 pid=3050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 23:55:43.802000 audit[3052]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.802000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea71c3d0 a2=0 a3=1 items=0 ppid=2925 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.802000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 23:55:43.806000 audit[3055]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.806000 audit[3055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd5eea550 a2=0 a3=1 items=0 ppid=2925 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 23:55:43.810000 audit[3058]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.810000 audit[3058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffa283030 a2=0 a3=1 items=0 ppid=2925 pid=3058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 23:55:43.811000 audit[3059]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.811000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd6c89710 a2=0 a3=1 items=0 ppid=2925 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 23:55:43.814000 audit[3061]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.814000 audit[3061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcaf5f6b0 a2=0 a3=1 items=0 ppid=2925 pid=3061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.814000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:55:43.819000 audit[3064]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.819000 audit[3064]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffc2bcbe70 a2=0 a3=1 items=0 ppid=2925 pid=3064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:55:43.821000 audit[3065]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.821000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc41ccd0 a2=0 a3=1 items=0 ppid=2925 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 23:55:43.825000 audit[3067]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3067 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:55:43.825000 audit[3067]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffdf858a90 a2=0 a3=1 items=0 ppid=2925 pid=3067 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.825000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 23:55:43.849000 audit[3073]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:43.849000 audit[3073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb5eb420 a2=0 a3=1 items=0 ppid=2925 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.849000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:43.858000 audit[3073]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:43.858000 audit[3073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffcb5eb420 a2=0 a3=1 items=0 ppid=2925 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:43.862000 audit[3078]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.862000 audit[3078]: 
SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe9f2bd80 a2=0 a3=1 items=0 ppid=2925 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.862000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 23:55:43.865000 audit[3080]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.865000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcfaaba20 a2=0 a3=1 items=0 ppid=2925 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.865000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 23:55:43.871000 audit[3083]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.871000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdcd42950 a2=0 a3=1 items=0 ppid=2925 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.871000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 23:55:43.873000 audit[3084]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.873000 audit[3084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd010010 a2=0 a3=1 items=0 ppid=2925 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 23:55:43.877000 audit[3086]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.877000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca85ad20 a2=0 a3=1 items=0 ppid=2925 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.877000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 23:55:43.878000 audit[3087]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.878000 audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc463760 a2=0 a3=1 items=0 ppid=2925 pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 23:55:43.881000 audit[3089]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.881000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffffa055a0 a2=0 a3=1 items=0 ppid=2925 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 23:55:43.886000 audit[3092]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.886000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd23d3e90 a2=0 a3=1 items=0 ppid=2925 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 23:55:43.887000 audit[3093]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.887000 audit[3093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe12b30f0 a2=0 a3=1 items=0 ppid=2925 pid=3093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 23:55:43.890000 audit[3095]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.890000 audit[3095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffda13c1e0 a2=0 a3=1 items=0 ppid=2925 pid=3095 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 23:55:43.892000 audit[3096]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3096 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.892000 audit[3096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe145d030 a2=0 a3=1 items=0 ppid=2925 pid=3096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.892000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 23:55:43.894000 audit[3098]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3098 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.894000 audit[3098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd107f100 a2=0 a3=1 items=0 ppid=2925 pid=3098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 23:55:43.898000 audit[3101]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.898000 audit[3101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff391b040 a2=0 a3=1 items=0 ppid=2925 pid=3101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 23:55:43.903000 audit[3104]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3104 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.903000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff177f560 a2=0 a3=1 items=0 ppid=2925 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.903000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 23:55:43.904000 audit[3105]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.904000 audit[3105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd6135080 a2=0 a3=1 items=0 ppid=2925 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.904000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 23:55:43.906000 audit[3107]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.906000 audit[3107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd419d300 a2=0 a3=1 items=0 ppid=2925 pid=3107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.906000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:55:43.910000 audit[3110]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.910000 audit[3110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffee9c720 a2=0 a3=1 items=0 ppid=2925 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.910000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:55:43.911000 audit[3111]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.911000 audit[3111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc05e650 a2=0 a3=1 items=0 ppid=2925 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.911000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 23:55:43.913000 audit[3113]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.913000 audit[3113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff9408050 a2=0 a3=1 items=0 ppid=2925 pid=3113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.913000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 23:55:43.915000 audit[3114]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3114 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.915000 audit[3114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc4420b0 a2=0 a3=1 items=0 ppid=2925 pid=3114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.915000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 23:55:43.917000 audit[3116]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.917000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffed4bddc0 a2=0 a3=1 items=0 ppid=2925 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.917000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:55:43.920000 audit[3119]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:55:43.920000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcac68020 a2=0 a3=1 items=0 ppid=2925 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.920000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:55:43.924000 audit[3121]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 23:55:43.924000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffe9ea13d0 a2=0 a3=1 items=0 ppid=2925 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.924000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:43.924000 audit[3121]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 23:55:43.924000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe9ea13d0 a2=0 a3=1 items=0 ppid=2925 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:43.924000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:45.984068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866788928.mount: Deactivated successfully. Jan 20 23:55:46.539286 containerd[1592]: time="2026-01-20T23:55:46.539234533Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:46.541079 containerd[1592]: time="2026-01-20T23:55:46.540801231Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 20 23:55:46.541079 containerd[1592]: time="2026-01-20T23:55:46.540889634Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:46.546837 containerd[1592]: time="2026-01-20T23:55:46.546783852Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:55:46.549673 containerd[1592]: time="2026-01-20T23:55:46.549628877Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.793382299s" Jan 20 23:55:46.549814 containerd[1592]: time="2026-01-20T23:55:46.549797763Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 20 23:55:46.553942 containerd[1592]: time="2026-01-20T23:55:46.553839592Z" level=info msg="CreateContainer within sandbox \"407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 23:55:46.563059 containerd[1592]: time="2026-01-20T23:55:46.562556154Z" level=info msg="Container 1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:55:46.576357 containerd[1592]: time="2026-01-20T23:55:46.576205298Z" level=info msg="CreateContainer within sandbox \"407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9\"" Jan 20 23:55:46.579203 containerd[1592]: time="2026-01-20T23:55:46.579155527Z" level=info msg="StartContainer for \"1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9\"" Jan 20 23:55:46.580670 containerd[1592]: time="2026-01-20T23:55:46.580632621Z" level=info msg="connecting to shim 1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9" address="unix:///run/containerd/s/736c5e97012321b6658e67214af4a828d2f8738f592600cac6fa600382173113" protocol=ttrpc version=3 Jan 20 23:55:46.603304 systemd[1]: Started cri-containerd-1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9.scope - libcontainer container 1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9. 
Jan 20 23:55:46.615000 audit: BPF prog-id=144 op=LOAD Jan 20 23:55:46.616000 audit: BPF prog-id=145 op=LOAD Jan 20 23:55:46.616000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.616000 audit: BPF prog-id=145 op=UNLOAD Jan 20 23:55:46.616000 audit[3131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.616000 audit: BPF prog-id=146 op=LOAD Jan 20 23:55:46.616000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.616000 audit: BPF prog-id=147 op=LOAD Jan 20 23:55:46.616000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.616000 audit: BPF prog-id=147 op=UNLOAD Jan 20 23:55:46.616000 audit[3131]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.616000 audit: BPF prog-id=146 op=UNLOAD Jan 20 23:55:46.616000 audit[3131]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.616000 audit: BPF prog-id=148 op=LOAD Jan 20 23:55:46.616000 audit[3131]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2994 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:46.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134323934313938353262363763633231636338306233613265636534 Jan 20 23:55:46.639323 containerd[1592]: time="2026-01-20T23:55:46.639185662Z" level=info msg="StartContainer for \"1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9\" returns successfully" Jan 20 23:55:47.545429 kubelet[2824]: I0120 23:55:47.545018 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-q64bh" podStartSLOduration=1.746898704 podStartE2EDuration="4.54499808s" podCreationTimestamp="2026-01-20 23:55:43 +0000 UTC" firstStartedPulling="2026-01-20 23:55:43.753353128 +0000 UTC m=+8.423106583" lastFinishedPulling="2026-01-20 23:55:46.551452544 +0000 UTC m=+11.221205959" observedRunningTime="2026-01-20 23:55:47.544571625 +0000 UTC m=+12.214325040" watchObservedRunningTime="2026-01-20 23:55:47.54499808 +0000 UTC m=+12.214751455" Jan 20 23:55:52.916946 sudo[1884]: pam_unix(sudo:session): session closed for user root Jan 20 23:55:52.919719 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 23:55:52.919821 kernel: audit: type=1106 audit(1768953352.915:510): pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:52.915000 audit[1884]: USER_END pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:52.916000 audit[1884]: CRED_DISP pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:55:52.921763 kernel: audit: type=1104 audit(1768953352.916:511): pid=1884 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:53.014364 sshd[1883]: Connection closed by 20.161.92.111 port 40654 Jan 20 23:55:53.016289 sshd-session[1879]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:53.018000 audit[1879]: USER_END pid=1879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:53.018000 audit[1879]: CRED_DISP pid=1879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:53.023813 kernel: audit: type=1106 audit(1768953353.018:512): pid=1879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:53.023870 kernel: audit: type=1104 audit(1768953353.018:513): pid=1879 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:53.029164 kernel: audit: type=1131 audit(1768953353.024:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-188.245.60.37:22-20.161.92.111:40654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:53.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-188.245.60.37:22-20.161.92.111:40654 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:53.025546 systemd[1]: sshd@7-188.245.60.37:22-20.161.92.111:40654.service: Deactivated successfully. Jan 20 23:55:53.031191 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 23:55:53.031453 systemd[1]: session-8.scope: Consumed 7.490s CPU time, 219.1M memory peak. Jan 20 23:55:53.033835 systemd-logind[1569]: Session 8 logged out. Waiting for processes to exit. Jan 20 23:55:53.039169 systemd-logind[1569]: Removed session 8. 
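The kernel-forwarded audit records above carry their own audit(<epoch>.<millis>:<serial>) header in addition to the journal timestamp; a small sketch (function and field names are illustrative) converting that epoch back to the wall-clock time the journal shows, using the USER_END record at audit(1768953353.018:512) as the example:

    import re
    from datetime import datetime, timezone

    def audit_header_to_utc(record: str) -> str:
        # Header looks like: audit(1768953353.018:512)
        m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", record)
        epoch, millis, serial = int(m.group(1)), m.group(2), m.group(3)
        ts = datetime.fromtimestamp(epoch, tz=timezone.utc)
        return f"{ts:%b %d %H:%M:%S}.{millis} UTC (serial {serial})"

    print(audit_header_to_utc("type=1106 audit(1768953353.018:512): pid=1879 ..."))
    # -> Jan 20 23:55:53.018 UTC (serial 512)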
Jan 20 23:55:56.405000 audit[3212]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:56.405000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd41d6cd0 a2=0 a3=1 items=0 ppid=2925 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.409227 kernel: audit: type=1325 audit(1768953356.405:515): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:56.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:56.413067 kernel: audit: type=1300 audit(1768953356.405:515): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd41d6cd0 a2=0 a3=1 items=0 ppid=2925 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.413128 kernel: audit: type=1327 audit(1768953356.405:515): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:56.414000 audit[3212]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:56.414000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd41d6cd0 a2=0 a3=1 items=0 ppid=2925 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.419611 kernel: audit: type=1325 audit(1768953356.414:516): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:56.419672 kernel: audit: type=1300 audit(1768953356.414:516): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd41d6cd0 a2=0 a3=1 items=0 ppid=2925 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:56.429000 audit[3214]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:56.429000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcf9641c0 a2=0 a3=1 items=0 ppid=2925 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.429000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:56.434000 audit[3214]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3214 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 23:55:56.434000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf9641c0 a2=0 a3=1 items=0 ppid=2925 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.434000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.642302 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 23:56:00.642429 kernel: audit: type=1325 audit(1768953360.640:519): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.640000 audit[3216]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.640000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2f923b0 a2=0 a3=1 items=0 ppid=2925 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.640000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.647093 kernel: audit: type=1300 audit(1768953360.640:519): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2f923b0 a2=0 a3=1 items=0 ppid=2925 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.647269 kernel: audit: type=1327 audit(1768953360.640:519): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.648000 audit[3216]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.651057 kernel: audit: type=1325 audit(1768953360.648:520): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.653510 kernel: audit: type=1300 audit(1768953360.648:520): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2f923b0 a2=0 a3=1 items=0 ppid=2925 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.648000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2f923b0 a2=0 a3=1 items=0 ppid=2925 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.648000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.655626 kernel: audit: type=1327 audit(1768953360.648:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.661000 audit[3218]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.661000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe216d6a0 a2=0 a3=1 items=0 ppid=2925 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.666128 kernel: audit: type=1325 audit(1768953360.661:521): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.666220 kernel: audit: type=1300 audit(1768953360.661:521): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe216d6a0 a2=0 a3=1 items=0 ppid=2925 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.667820 kernel: audit: type=1327 audit(1768953360.661:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.670000 audit[3218]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:00.670000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe216d6a0 a2=0 a3=1 items=0 ppid=2925 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:00.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:00.672216 kernel: audit: type=1325 audit(1768953360.670:522): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:01.752000 audit[3220]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:01.752000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe74b9560 a2=0 a3=1 items=0 ppid=2925 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:01.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:01.758000 audit[3220]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:01.758000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe74b9560 a2=0 a3=1 items=0 ppid=2925 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:01.758000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:04.333000 audit[3223]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:04.333000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe7a80690 a2=0 a3=1 items=0 ppid=2925 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.333000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:04.337000 audit[3223]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:04.337000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe7a80690 a2=0 a3=1 items=0 ppid=2925 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.337000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:04.357000 audit[3225]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:04.357000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc65476a0 a2=0 a3=1 items=0 ppid=2925 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.357000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:04.375135 systemd[1]: Created slice kubepods-besteffort-pod16f9f37a_e7a0_406a_86cf_866e710be141.slice - libcontainer container kubepods-besteffort-pod16f9f37a_e7a0_406a_86cf_866e710be141.slice. 
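The slice created just above encodes the pod's QoS class and UID, with the dashes of the UID mapped to underscores; a rough helper, assuming the systemd cgroup driver naming visible here, that recovers the UID the kubelet reports below for calico-typha-56545c98bd-rjb6f:

    import re

    def pod_uid_from_slice(slice_name: str) -> str:
        # kubepods-<qos>-pod<uid_with_underscores>.slice
        m = re.search(r"pod([0-9a-f_]+)\.slice$", slice_name)
        return m.group(1).replace("_", "-")

    print(pod_uid_from_slice(
        "kubepods-besteffort-pod16f9f37a_e7a0_406a_86cf_866e710be141.slice"
    ))
    # -> 16f9f37a-e7a0-406a-86cf-866e710be141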
Jan 20 23:56:04.405000 audit[3225]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:04.405000 audit[3225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc65476a0 a2=0 a3=1 items=0 ppid=2925 pid=3225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:04.436892 kubelet[2824]: I0120 23:56:04.436841 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16f9f37a-e7a0-406a-86cf-866e710be141-tigera-ca-bundle\") pod \"calico-typha-56545c98bd-rjb6f\" (UID: \"16f9f37a-e7a0-406a-86cf-866e710be141\") " pod="calico-system/calico-typha-56545c98bd-rjb6f" Jan 20 23:56:04.436892 kubelet[2824]: I0120 23:56:04.436894 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7fz2\" (UniqueName: \"kubernetes.io/projected/16f9f37a-e7a0-406a-86cf-866e710be141-kube-api-access-r7fz2\") pod \"calico-typha-56545c98bd-rjb6f\" (UID: \"16f9f37a-e7a0-406a-86cf-866e710be141\") " pod="calico-system/calico-typha-56545c98bd-rjb6f" Jan 20 23:56:04.436892 kubelet[2824]: I0120 23:56:04.436913 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/16f9f37a-e7a0-406a-86cf-866e710be141-typha-certs\") pod \"calico-typha-56545c98bd-rjb6f\" (UID: \"16f9f37a-e7a0-406a-86cf-866e710be141\") " pod="calico-system/calico-typha-56545c98bd-rjb6f" Jan 20 23:56:04.560791 systemd[1]: Created slice kubepods-besteffort-podea7a1529_5bd0_4b6f_b527_92cbaa7c8ae5.slice - libcontainer container kubepods-besteffort-podea7a1529_5bd0_4b6f_b527_92cbaa7c8ae5.slice. 
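The kubelet lines interleaved here use the klog header format (severity letter, MMDD date, wall-clock time, pid, source file:line); a minimal parsing sketch, assuming that format, for pulling such records out of the combined journal stream:

    import re

    KLOG = re.compile(
        r"(?P<sev>[IWEF])(?P<mmdd>\d{4}) +"
        r"(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +(?P<pid>\d+) +"
        r"(?P<src>[\w./-]+:\d+)\] (?P<msg>.*)"
    )

    sample = ('I0120 23:56:04.436841 2824 reconciler_common.go:251] '
              '"operationExecutor.VerifyControllerAttachedVolume started for volume ..."')
    m = KLOG.match(sample)
    print(m.group("sev"), m.group("pid"), m.group("src"))
    # -> I 2824 reconciler_common.go:251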
Jan 20 23:56:04.638402 kubelet[2824]: I0120 23:56:04.638260 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-cni-log-dir\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.638402 kubelet[2824]: I0120 23:56:04.638316 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-flexvol-driver-host\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.638402 kubelet[2824]: I0120 23:56:04.638340 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-policysync\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.638402 kubelet[2824]: I0120 23:56:04.638362 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-cni-bin-dir\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.638969 kubelet[2824]: I0120 23:56:04.638382 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-var-lib-calico\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639025 kubelet[2824]: I0120 23:56:04.639000 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cnt2\" (UniqueName: \"kubernetes.io/projected/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-kube-api-access-6cnt2\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639085 kubelet[2824]: I0120 23:56:04.639073 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-xtables-lock\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639114 kubelet[2824]: I0120 23:56:04.639101 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-tigera-ca-bundle\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639635 kubelet[2824]: I0120 23:56:04.639611 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-cni-net-dir\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639698 kubelet[2824]: I0120 23:56:04.639647 2824 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-node-certs\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639734 kubelet[2824]: I0120 23:56:04.639707 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-lib-modules\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.639764 kubelet[2824]: I0120 23:56:04.639751 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5-var-run-calico\") pod \"calico-node-nwdhk\" (UID: \"ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5\") " pod="calico-system/calico-node-nwdhk" Jan 20 23:56:04.682943 containerd[1592]: time="2026-01-20T23:56:04.682635198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56545c98bd-rjb6f,Uid:16f9f37a-e7a0-406a-86cf-866e710be141,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:04.713573 containerd[1592]: time="2026-01-20T23:56:04.713515795Z" level=info msg="connecting to shim 4b65c820f08c2359b5159f87ab72f34ce103b94c5321f3d6685db2cff5bf0864" address="unix:///run/containerd/s/6bf6b0969d69268b2a23f9a7f3d70403709b828b2fb63446a365a3f2a1f6153e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:04.745518 kubelet[2824]: E0120 23:56:04.745462 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:04.754231 kubelet[2824]: E0120 23:56:04.754150 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.754231 kubelet[2824]: W0120 23:56:04.754172 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.754231 kubelet[2824]: E0120 23:56:04.754194 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.765899 kubelet[2824]: E0120 23:56:04.765798 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.765899 kubelet[2824]: W0120 23:56:04.765827 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.765899 kubelet[2824]: E0120 23:56:04.765856 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.769007 kubelet[2824]: E0120 23:56:04.768925 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.769545 kubelet[2824]: W0120 23:56:04.769518 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.769950 kubelet[2824]: E0120 23:56:04.769895 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.780420 systemd[1]: Started cri-containerd-4b65c820f08c2359b5159f87ab72f34ce103b94c5321f3d6685db2cff5bf0864.scope - libcontainer container 4b65c820f08c2359b5159f87ab72f34ce103b94c5321f3d6685db2cff5bf0864. Jan 20 23:56:04.812873 kubelet[2824]: E0120 23:56:04.812839 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.812873 kubelet[2824]: W0120 23:56:04.812865 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.813171 kubelet[2824]: E0120 23:56:04.812884 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.813171 kubelet[2824]: E0120 23:56:04.813112 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.813171 kubelet[2824]: W0120 23:56:04.813122 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.813171 kubelet[2824]: E0120 23:56:04.813138 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.813485 kubelet[2824]: E0120 23:56:04.813464 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.813485 kubelet[2824]: W0120 23:56:04.813480 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.813607 kubelet[2824]: E0120 23:56:04.813494 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.813913 kubelet[2824]: E0120 23:56:04.813678 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.813913 kubelet[2824]: W0120 23:56:04.813693 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.813913 kubelet[2824]: E0120 23:56:04.813702 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.814089 kubelet[2824]: E0120 23:56:04.813964 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.814089 kubelet[2824]: W0120 23:56:04.813974 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.814089 kubelet[2824]: E0120 23:56:04.813983 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.814402 kubelet[2824]: E0120 23:56:04.814149 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.814402 kubelet[2824]: W0120 23:56:04.814157 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.814402 kubelet[2824]: E0120 23:56:04.814165 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.814402 kubelet[2824]: E0120 23:56:04.814388 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.814402 kubelet[2824]: W0120 23:56:04.814400 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.814511 kubelet[2824]: E0120 23:56:04.814409 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.814807 kubelet[2824]: E0120 23:56:04.814542 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.814807 kubelet[2824]: W0120 23:56:04.814592 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.814807 kubelet[2824]: E0120 23:56:04.814603 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.815768 kubelet[2824]: E0120 23:56:04.814832 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.815768 kubelet[2824]: W0120 23:56:04.814841 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.815768 kubelet[2824]: E0120 23:56:04.814850 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.815768 kubelet[2824]: E0120 23:56:04.815384 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.815768 kubelet[2824]: W0120 23:56:04.815395 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.815768 kubelet[2824]: E0120 23:56:04.815596 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.816162 kubelet[2824]: E0120 23:56:04.816141 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.816162 kubelet[2824]: W0120 23:56:04.816156 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.816357 kubelet[2824]: E0120 23:56:04.816168 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.817283 kubelet[2824]: E0120 23:56:04.817265 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.817283 kubelet[2824]: W0120 23:56:04.817279 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.817445 kubelet[2824]: E0120 23:56:04.817291 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.818135 kubelet[2824]: E0120 23:56:04.818119 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.818135 kubelet[2824]: W0120 23:56:04.818135 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.818237 kubelet[2824]: E0120 23:56:04.818146 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.817000 audit: BPF prog-id=149 op=LOAD Jan 20 23:56:04.818000 audit: BPF prog-id=150 op=LOAD Jan 20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.818000 audit: BPF prog-id=150 op=UNLOAD Jan 20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.818000 audit: BPF prog-id=151 op=LOAD Jan 20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.818000 audit: BPF prog-id=152 op=LOAD Jan 20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.818000 audit: BPF prog-id=152 op=UNLOAD Jan 20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.818000 audit: BPF prog-id=151 op=UNLOAD Jan 
20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.818000 audit: BPF prog-id=153 op=LOAD Jan 20 23:56:04.818000 audit[3249]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=3237 pid=3249 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462363563383230663038633233353962353135396638376162373266 Jan 20 23:56:04.820701 kubelet[2824]: E0120 23:56:04.819886 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.820701 kubelet[2824]: W0120 23:56:04.819898 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.820701 kubelet[2824]: E0120 23:56:04.819910 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.820701 kubelet[2824]: E0120 23:56:04.820384 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.820701 kubelet[2824]: W0120 23:56:04.820403 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.820701 kubelet[2824]: E0120 23:56:04.820431 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.820701 kubelet[2824]: E0120 23:56:04.820604 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.820701 kubelet[2824]: W0120 23:56:04.820612 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.820701 kubelet[2824]: E0120 23:56:04.820621 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.821348 kubelet[2824]: E0120 23:56:04.820941 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.821348 kubelet[2824]: W0120 23:56:04.820952 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.821348 kubelet[2824]: E0120 23:56:04.820964 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.821466 kubelet[2824]: E0120 23:56:04.821371 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.821466 kubelet[2824]: W0120 23:56:04.821382 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.821466 kubelet[2824]: E0120 23:56:04.821400 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.821705 kubelet[2824]: E0120 23:56:04.821683 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.821705 kubelet[2824]: W0120 23:56:04.821698 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.821705 kubelet[2824]: E0120 23:56:04.821708 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.821947 kubelet[2824]: E0120 23:56:04.821930 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.821990 kubelet[2824]: W0120 23:56:04.821958 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.821990 kubelet[2824]: E0120 23:56:04.821976 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.842023 kubelet[2824]: E0120 23:56:04.841967 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.842023 kubelet[2824]: W0120 23:56:04.841993 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.842023 kubelet[2824]: E0120 23:56:04.842013 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.842775 kubelet[2824]: I0120 23:56:04.842473 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9b72cdbf-b6bd-45ae-98ac-50d5aed18456-varrun\") pod \"csi-node-driver-5sc47\" (UID: \"9b72cdbf-b6bd-45ae-98ac-50d5aed18456\") " pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:04.842930 kubelet[2824]: E0120 23:56:04.842883 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.842930 kubelet[2824]: W0120 23:56:04.842908 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.842930 kubelet[2824]: E0120 23:56:04.842927 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.843726 kubelet[2824]: I0120 23:56:04.842946 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9b72cdbf-b6bd-45ae-98ac-50d5aed18456-kubelet-dir\") pod \"csi-node-driver-5sc47\" (UID: \"9b72cdbf-b6bd-45ae-98ac-50d5aed18456\") " pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:04.844248 kubelet[2824]: E0120 23:56:04.844182 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.844248 kubelet[2824]: W0120 23:56:04.844237 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.844248 kubelet[2824]: E0120 23:56:04.844261 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.844248 kubelet[2824]: I0120 23:56:04.844285 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9b72cdbf-b6bd-45ae-98ac-50d5aed18456-socket-dir\") pod \"csi-node-driver-5sc47\" (UID: \"9b72cdbf-b6bd-45ae-98ac-50d5aed18456\") " pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:04.844999 kubelet[2824]: E0120 23:56:04.844956 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.844999 kubelet[2824]: W0120 23:56:04.844974 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.845149 kubelet[2824]: E0120 23:56:04.845117 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.845284 kubelet[2824]: E0120 23:56:04.845263 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.845284 kubelet[2824]: W0120 23:56:04.845283 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.845437 kubelet[2824]: E0120 23:56:04.845301 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.845553 kubelet[2824]: E0120 23:56:04.845535 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.845583 kubelet[2824]: W0120 23:56:04.845553 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.845627 kubelet[2824]: E0120 23:56:04.845613 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.846015 kubelet[2824]: E0120 23:56:04.845998 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.846273 kubelet[2824]: W0120 23:56:04.846016 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.846273 kubelet[2824]: I0120 23:56:04.846259 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9b72cdbf-b6bd-45ae-98ac-50d5aed18456-registration-dir\") pod \"csi-node-driver-5sc47\" (UID: \"9b72cdbf-b6bd-45ae-98ac-50d5aed18456\") " pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:04.846617 kubelet[2824]: E0120 23:56:04.846029 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.847263 kubelet[2824]: E0120 23:56:04.847230 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.847263 kubelet[2824]: W0120 23:56:04.847250 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.847263 kubelet[2824]: E0120 23:56:04.847264 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.848321 kubelet[2824]: E0120 23:56:04.848285 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.848321 kubelet[2824]: W0120 23:56:04.848309 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.848321 kubelet[2824]: E0120 23:56:04.848331 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.848477 kubelet[2824]: I0120 23:56:04.848356 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qkx5\" (UniqueName: \"kubernetes.io/projected/9b72cdbf-b6bd-45ae-98ac-50d5aed18456-kube-api-access-5qkx5\") pod \"csi-node-driver-5sc47\" (UID: \"9b72cdbf-b6bd-45ae-98ac-50d5aed18456\") " pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:04.849256 kubelet[2824]: E0120 23:56:04.849190 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.849256 kubelet[2824]: W0120 23:56:04.849226 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.849256 kubelet[2824]: E0120 23:56:04.849246 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.849722 kubelet[2824]: E0120 23:56:04.849662 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.849722 kubelet[2824]: W0120 23:56:04.849680 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.849722 kubelet[2824]: E0120 23:56:04.849691 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.850145 kubelet[2824]: E0120 23:56:04.850125 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.850145 kubelet[2824]: W0120 23:56:04.850141 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.850251 kubelet[2824]: E0120 23:56:04.850153 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.851330 kubelet[2824]: E0120 23:56:04.851307 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.851330 kubelet[2824]: W0120 23:56:04.851324 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.851422 kubelet[2824]: E0120 23:56:04.851339 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.852327 kubelet[2824]: E0120 23:56:04.852188 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.852327 kubelet[2824]: W0120 23:56:04.852324 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.852426 kubelet[2824]: E0120 23:56:04.852343 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.853000 kubelet[2824]: E0120 23:56:04.852962 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.853000 kubelet[2824]: W0120 23:56:04.852980 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.853000 kubelet[2824]: E0120 23:56:04.852993 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.858507 containerd[1592]: time="2026-01-20T23:56:04.858460829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56545c98bd-rjb6f,Uid:16f9f37a-e7a0-406a-86cf-866e710be141,Namespace:calico-system,Attempt:0,} returns sandbox id \"4b65c820f08c2359b5159f87ab72f34ce103b94c5321f3d6685db2cff5bf0864\"" Jan 20 23:56:04.863192 containerd[1592]: time="2026-01-20T23:56:04.863149763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 23:56:04.868778 containerd[1592]: time="2026-01-20T23:56:04.868532105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwdhk,Uid:ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:04.899986 containerd[1592]: time="2026-01-20T23:56:04.899870387Z" level=info msg="connecting to shim 000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509" address="unix:///run/containerd/s/2ade32952f235bdce045f4185e85c89938852422b0cbfcd52f2bba74c3d8d21c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:04.936344 systemd[1]: Started cri-containerd-000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509.scope - libcontainer container 000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509. 
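Note on the repeated driver-call.go / plugins.go messages above and below: kubelet is probing the FlexVolume driver path /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds before the calico-node flexvol-driver init container (the pod2daemon-flexvol image pulled later in this log) has copied the binary into the flexvol-driver-host host-path volume, so the exec returns empty output and the empty string fails JSON unmarshalling. As a rough, hypothetical illustration of the call/response contract kubelet expects from such a driver (the path and the fact that only "init" is handled are assumptions for this sketch; the real driver is the uds binary shipped by Calico, not this script):

#!/usr/bin/env python3
# Hypothetical stand-in for a FlexVolume driver executable such as
# /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
# kubelet invokes the driver with an operation name and parses stdout as JSON;
# a missing binary (empty output) is what produces "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # A successful init reply advertises whether the driver supports attach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Operations this sketch does not implement are reported as unsupported.
    print(json.dumps({"status": "Not supported", "message": "operation %r not implemented" % op}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Once the real driver has been installed by the init container these probe errors typically stop; during the window logged here they are generally treated as benign noise.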
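The surrounding audit records carry the runc command line as a hex-encoded proctitle value, because auditd hex-encodes fields that contain NUL bytes (argv entries are NUL-separated). A small standard-library sketch for reading such entries (the function and script name are illustrative, not part of any tool referenced in this log):

#!/usr/bin/env python3
# Decode an auditd PROCTITLE hex value back into the space-separated argv it represents.
import sys

def decode_proctitle(hex_value: str) -> str:
    # The decoded bytes are the argv entries separated by NUL bytes.
    return " ".join(bytes.fromhex(hex_value).decode("utf-8", "replace").split("\x00"))

if __name__ == "__main__":
    print(decode_proctitle(sys.argv[1]))

Applied to the values above, the prefix 72756E63... decodes to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/..." (the trailing container id is truncated in the audit record itself), matching the cri-containerd scopes started by systemd in this section.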
Jan 20 23:56:04.952000 audit: BPF prog-id=154 op=LOAD Jan 20 23:56:04.953000 audit: BPF prog-id=155 op=LOAD Jan 20 23:56:04.953000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.953000 audit: BPF prog-id=155 op=UNLOAD Jan 20 23:56:04.954603 kubelet[2824]: E0120 23:56:04.954513 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.954603 kubelet[2824]: W0120 23:56:04.954545 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.954603 kubelet[2824]: E0120 23:56:04.954568 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.953000 audit[3339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.954963 kubelet[2824]: E0120 23:56:04.954780 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.954963 kubelet[2824]: W0120 23:56:04.954787 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.954963 kubelet[2824]: E0120 23:56:04.954796 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.955163 kubelet[2824]: E0120 23:56:04.954996 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.955163 kubelet[2824]: W0120 23:56:04.955017 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.955163 kubelet[2824]: E0120 23:56:04.955066 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.953000 audit: BPF prog-id=156 op=LOAD Jan 20 23:56:04.953000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.954000 audit: BPF prog-id=157 op=LOAD Jan 20 23:56:04.954000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.954000 audit: BPF prog-id=157 op=UNLOAD Jan 20 23:56:04.954000 audit[3339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.954000 audit: BPF prog-id=156 op=UNLOAD Jan 20 23:56:04.954000 audit[3339]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.954000 audit: BPF prog-id=158 op=LOAD Jan 20 23:56:04.954000 audit[3339]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3328 pid=3339 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:04.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3030306366376630626639643564363639346636373764663832626662 Jan 20 23:56:04.956186 kubelet[2824]: E0120 23:56:04.956170 2824 driver-call.go:262] Failed to unmarshal output for 
command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.956279 kubelet[2824]: W0120 23:56:04.956185 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.956279 kubelet[2824]: E0120 23:56:04.956251 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.957100 kubelet[2824]: E0120 23:56:04.957079 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.957322 kubelet[2824]: W0120 23:56:04.957284 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.957387 kubelet[2824]: E0120 23:56:04.957359 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.957612 kubelet[2824]: E0120 23:56:04.957590 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.957612 kubelet[2824]: W0120 23:56:04.957600 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.957938 kubelet[2824]: E0120 23:56:04.957905 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.958200 kubelet[2824]: E0120 23:56:04.958174 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.958200 kubelet[2824]: W0120 23:56:04.958190 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.958752 kubelet[2824]: E0120 23:56:04.958361 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.958752 kubelet[2824]: E0120 23:56:04.958470 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.958752 kubelet[2824]: W0120 23:56:04.958480 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.958752 kubelet[2824]: E0120 23:56:04.958507 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.958752 kubelet[2824]: E0120 23:56:04.958757 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.958752 kubelet[2824]: W0120 23:56:04.958767 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.959584 kubelet[2824]: E0120 23:56:04.959144 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.959584 kubelet[2824]: W0120 23:56:04.959162 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.959584 kubelet[2824]: E0120 23:56:04.959169 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.959584 kubelet[2824]: E0120 23:56:04.959335 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.959584 kubelet[2824]: E0120 23:56:04.959414 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.959584 kubelet[2824]: W0120 23:56:04.959425 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.959584 kubelet[2824]: E0120 23:56:04.959519 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.959843 kubelet[2824]: E0120 23:56:04.959610 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.959843 kubelet[2824]: W0120 23:56:04.959617 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.959843 kubelet[2824]: E0120 23:56:04.959631 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.959843 kubelet[2824]: E0120 23:56:04.959766 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.959843 kubelet[2824]: W0120 23:56:04.959773 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.959843 kubelet[2824]: E0120 23:56:04.959780 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.960056 kubelet[2824]: E0120 23:56:04.959975 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.960056 kubelet[2824]: W0120 23:56:04.959996 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.960056 kubelet[2824]: E0120 23:56:04.960004 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.960580 kubelet[2824]: E0120 23:56:04.960194 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.960580 kubelet[2824]: W0120 23:56:04.960219 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.960580 kubelet[2824]: E0120 23:56:04.960362 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.960807 kubelet[2824]: E0120 23:56:04.960630 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.960807 kubelet[2824]: W0120 23:56:04.960641 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.960807 kubelet[2824]: E0120 23:56:04.960679 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.961430 kubelet[2824]: E0120 23:56:04.961405 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.961430 kubelet[2824]: W0120 23:56:04.961423 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.961682 kubelet[2824]: E0120 23:56:04.961613 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.961682 kubelet[2824]: W0120 23:56:04.961626 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.961682 kubelet[2824]: E0120 23:56:04.961631 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.961682 kubelet[2824]: E0120 23:56:04.961658 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.962163 kubelet[2824]: E0120 23:56:04.962141 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.962163 kubelet[2824]: W0120 23:56:04.962157 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.962317 kubelet[2824]: E0120 23:56:04.962250 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.962580 kubelet[2824]: E0120 23:56:04.962467 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.962580 kubelet[2824]: W0120 23:56:04.962477 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.962580 kubelet[2824]: E0120 23:56:04.962573 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.962806 kubelet[2824]: E0120 23:56:04.962686 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.962806 kubelet[2824]: W0120 23:56:04.962701 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.962806 kubelet[2824]: E0120 23:56:04.962718 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.962981 kubelet[2824]: E0120 23:56:04.962947 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.962981 kubelet[2824]: W0120 23:56:04.962958 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.963166 kubelet[2824]: E0120 23:56:04.963127 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:04.963221 kubelet[2824]: E0120 23:56:04.963176 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.963221 kubelet[2824]: W0120 23:56:04.963185 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.963400 kubelet[2824]: E0120 23:56:04.963377 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.963400 kubelet[2824]: W0120 23:56:04.963392 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.963400 kubelet[2824]: E0120 23:56:04.963403 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.964105 kubelet[2824]: E0120 23:56:04.963247 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.965108 kubelet[2824]: E0120 23:56:04.964400 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.965108 kubelet[2824]: W0120 23:56:04.964415 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.965108 kubelet[2824]: E0120 23:56:04.964439 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:04.979844 containerd[1592]: time="2026-01-20T23:56:04.979802951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nwdhk,Uid:ea7a1529-5bd0-4b6f-b527-92cbaa7c8ae5,Namespace:calico-system,Attempt:0,} returns sandbox id \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\"" Jan 20 23:56:04.986537 kubelet[2824]: E0120 23:56:04.986412 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:04.986537 kubelet[2824]: W0120 23:56:04.986448 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:04.986537 kubelet[2824]: E0120 23:56:04.986477 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:05.422000 audit[3393]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:05.422000 audit[3393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe31423f0 a2=0 a3=1 items=0 ppid=2925 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:05.422000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:05.428000 audit[3393]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3393 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:05.428000 audit[3393]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe31423f0 a2=0 a3=1 items=0 ppid=2925 pid=3393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:05.428000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:06.178111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375128360.mount: Deactivated successfully. Jan 20 23:56:06.454114 kubelet[2824]: E0120 23:56:06.453984 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:06.766442 containerd[1592]: time="2026-01-20T23:56:06.766374955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:06.767739 containerd[1592]: time="2026-01-20T23:56:06.767664128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:06.768969 containerd[1592]: time="2026-01-20T23:56:06.768657338Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:06.771419 containerd[1592]: time="2026-01-20T23:56:06.771367086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:06.772842 containerd[1592]: time="2026-01-20T23:56:06.772805780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.909614536s" Jan 20 23:56:06.772842 containerd[1592]: time="2026-01-20T23:56:06.772837781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 
23:56:06.776075 containerd[1592]: time="2026-01-20T23:56:06.775489687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 23:56:06.788023 containerd[1592]: time="2026-01-20T23:56:06.787922494Z" level=info msg="CreateContainer within sandbox \"4b65c820f08c2359b5159f87ab72f34ce103b94c5321f3d6685db2cff5bf0864\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 23:56:06.797057 containerd[1592]: time="2026-01-20T23:56:06.795580891Z" level=info msg="Container 6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:06.811575 containerd[1592]: time="2026-01-20T23:56:06.811508293Z" level=info msg="CreateContainer within sandbox \"4b65c820f08c2359b5159f87ab72f34ce103b94c5321f3d6685db2cff5bf0864\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909\"" Jan 20 23:56:06.812991 containerd[1592]: time="2026-01-20T23:56:06.812940028Z" level=info msg="StartContainer for \"6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909\"" Jan 20 23:56:06.814539 containerd[1592]: time="2026-01-20T23:56:06.814503884Z" level=info msg="connecting to shim 6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909" address="unix:///run/containerd/s/6bf6b0969d69268b2a23f9a7f3d70403709b828b2fb63446a365a3f2a1f6153e" protocol=ttrpc version=3 Jan 20 23:56:06.839289 systemd[1]: Started cri-containerd-6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909.scope - libcontainer container 6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909. Jan 20 23:56:06.854614 kernel: kauditd_printk_skb: 70 callbacks suppressed Jan 20 23:56:06.854709 kernel: audit: type=1334 audit(1768953366.852:547): prog-id=159 op=LOAD Jan 20 23:56:06.852000 audit: BPF prog-id=159 op=LOAD Jan 20 23:56:06.855000 audit: BPF prog-id=160 op=LOAD Jan 20 23:56:06.861287 kernel: audit: type=1334 audit(1768953366.855:548): prog-id=160 op=LOAD Jan 20 23:56:06.861373 kernel: audit: type=1300 audit(1768953366.855:548): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.855000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.864106 kernel: audit: type=1327 audit(1768953366.855:548): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.867956 kernel: audit: type=1334 audit(1768953366.855:549): prog-id=160 op=UNLOAD Jan 20 23:56:06.868019 kernel: audit: type=1300 audit(1768953366.855:549): 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.855000 audit: BPF prog-id=160 op=UNLOAD Jan 20 23:56:06.855000 audit[3404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.870916 kernel: audit: type=1327 audit(1768953366.855:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.855000 audit: BPF prog-id=161 op=LOAD Jan 20 23:56:06.855000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.874082 kernel: audit: type=1334 audit(1768953366.855:550): prog-id=161 op=LOAD Jan 20 23:56:06.874138 kernel: audit: type=1300 audit(1768953366.855:550): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.876570 kernel: audit: type=1327 audit(1768953366.855:550): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.855000 audit: BPF prog-id=162 op=LOAD Jan 20 23:56:06.855000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.857000 audit: BPF prog-id=162 op=UNLOAD Jan 20 23:56:06.857000 audit[3404]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.857000 audit: BPF prog-id=161 op=UNLOAD Jan 20 23:56:06.857000 audit[3404]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.857000 audit: BPF prog-id=163 op=LOAD Jan 20 23:56:06.857000 audit[3404]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3237 pid=3404 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:06.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631313165366463336364656332656137646133333961303432353065 Jan 20 23:56:06.907792 containerd[1592]: time="2026-01-20T23:56:06.907744110Z" level=info msg="StartContainer for \"6111e6dc3cdec2ea7da339a04250eadf3146bf9a4a3fddb783c7c5413d6da909\" returns successfully" Jan 20 23:56:07.643708 kubelet[2824]: E0120 23:56:07.643669 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.643708 kubelet[2824]: W0120 23:56:07.643693 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.643708 kubelet[2824]: E0120 23:56:07.643711 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.644329 kubelet[2824]: E0120 23:56:07.643866 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.644329 kubelet[2824]: W0120 23:56:07.643874 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.644329 kubelet[2824]: E0120 23:56:07.643929 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.644329 kubelet[2824]: E0120 23:56:07.644069 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.644329 kubelet[2824]: W0120 23:56:07.644076 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.644329 kubelet[2824]: E0120 23:56:07.644084 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.644329 kubelet[2824]: E0120 23:56:07.644197 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.644329 kubelet[2824]: W0120 23:56:07.644203 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.644329 kubelet[2824]: E0120 23:56:07.644236 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.644670 kubelet[2824]: E0120 23:56:07.644469 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.644670 kubelet[2824]: W0120 23:56:07.644478 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.644670 kubelet[2824]: E0120 23:56:07.644488 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.644787 kubelet[2824]: E0120 23:56:07.644675 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.644787 kubelet[2824]: W0120 23:56:07.644684 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.644787 kubelet[2824]: E0120 23:56:07.644692 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.644896 kubelet[2824]: E0120 23:56:07.644828 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.644896 kubelet[2824]: W0120 23:56:07.644835 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.644896 kubelet[2824]: E0120 23:56:07.644843 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.645159 kubelet[2824]: E0120 23:56:07.644982 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.645159 kubelet[2824]: W0120 23:56:07.644989 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.645159 kubelet[2824]: E0120 23:56:07.644996 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.645159 kubelet[2824]: E0120 23:56:07.645134 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.645159 kubelet[2824]: W0120 23:56:07.645146 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.645159 kubelet[2824]: E0120 23:56:07.645156 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.645678 kubelet[2824]: E0120 23:56:07.645368 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.645678 kubelet[2824]: W0120 23:56:07.645379 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.645678 kubelet[2824]: E0120 23:56:07.645388 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.645678 kubelet[2824]: E0120 23:56:07.645517 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.645678 kubelet[2824]: W0120 23:56:07.645524 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.645678 kubelet[2824]: E0120 23:56:07.645531 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.645678 kubelet[2824]: E0120 23:56:07.645642 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.645678 kubelet[2824]: W0120 23:56:07.645648 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.645678 kubelet[2824]: E0120 23:56:07.645656 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.645933 kubelet[2824]: E0120 23:56:07.645919 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.645933 kubelet[2824]: W0120 23:56:07.645933 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.645998 kubelet[2824]: E0120 23:56:07.645942 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.646156 kubelet[2824]: E0120 23:56:07.646144 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.646191 kubelet[2824]: W0120 23:56:07.646155 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.646191 kubelet[2824]: E0120 23:56:07.646177 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.646354 kubelet[2824]: E0120 23:56:07.646343 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.646354 kubelet[2824]: W0120 23:56:07.646354 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.646402 kubelet[2824]: E0120 23:56:07.646363 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.677063 kubelet[2824]: E0120 23:56:07.676930 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.677063 kubelet[2824]: W0120 23:56:07.676960 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.677063 kubelet[2824]: E0120 23:56:07.676980 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.677674 kubelet[2824]: E0120 23:56:07.677573 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.677674 kubelet[2824]: W0120 23:56:07.677589 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.677674 kubelet[2824]: E0120 23:56:07.677612 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.677998 kubelet[2824]: E0120 23:56:07.677985 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.678093 kubelet[2824]: W0120 23:56:07.678077 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.678176 kubelet[2824]: E0120 23:56:07.678162 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.678539 kubelet[2824]: E0120 23:56:07.678485 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.678539 kubelet[2824]: W0120 23:56:07.678497 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.678619 kubelet[2824]: E0120 23:56:07.678534 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.678895 kubelet[2824]: E0120 23:56:07.678840 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.678895 kubelet[2824]: W0120 23:56:07.678851 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.678895 kubelet[2824]: E0120 23:56:07.678886 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.679269 kubelet[2824]: E0120 23:56:07.679200 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.679269 kubelet[2824]: W0120 23:56:07.679213 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.679269 kubelet[2824]: E0120 23:56:07.679258 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.679686 kubelet[2824]: E0120 23:56:07.679587 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.679686 kubelet[2824]: W0120 23:56:07.679602 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.679686 kubelet[2824]: E0120 23:56:07.679639 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.679965 kubelet[2824]: E0120 23:56:07.679952 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.680027 kubelet[2824]: W0120 23:56:07.680016 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.680164 kubelet[2824]: E0120 23:56:07.680137 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.680420 kubelet[2824]: E0120 23:56:07.680405 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.680490 kubelet[2824]: W0120 23:56:07.680421 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.680490 kubelet[2824]: E0120 23:56:07.680439 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.680611 kubelet[2824]: E0120 23:56:07.680600 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.680611 kubelet[2824]: W0120 23:56:07.680611 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.680717 kubelet[2824]: E0120 23:56:07.680659 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.680759 kubelet[2824]: E0120 23:56:07.680744 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.680759 kubelet[2824]: W0120 23:56:07.680757 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.680861 kubelet[2824]: E0120 23:56:07.680780 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.680913 kubelet[2824]: E0120 23:56:07.680902 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.680913 kubelet[2824]: W0120 23:56:07.680912 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.680971 kubelet[2824]: E0120 23:56:07.680928 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.681392 kubelet[2824]: E0120 23:56:07.681195 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.681392 kubelet[2824]: W0120 23:56:07.681208 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.681392 kubelet[2824]: E0120 23:56:07.681236 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.681392 kubelet[2824]: E0120 23:56:07.681384 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.681392 kubelet[2824]: W0120 23:56:07.681393 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.681544 kubelet[2824]: E0120 23:56:07.681414 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.681570 kubelet[2824]: E0120 23:56:07.681552 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.681570 kubelet[2824]: W0120 23:56:07.681559 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.681570 kubelet[2824]: E0120 23:56:07.681567 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.681699 kubelet[2824]: E0120 23:56:07.681689 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.681699 kubelet[2824]: W0120 23:56:07.681699 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.681753 kubelet[2824]: E0120 23:56:07.681714 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:07.681987 kubelet[2824]: E0120 23:56:07.681966 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.682019 kubelet[2824]: W0120 23:56:07.681991 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.682019 kubelet[2824]: E0120 23:56:07.682010 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:56:07.682515 kubelet[2824]: E0120 23:56:07.682498 2824 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:56:07.682554 kubelet[2824]: W0120 23:56:07.682516 2824 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:56:07.682554 kubelet[2824]: E0120 23:56:07.682531 2824 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:56:08.171103 containerd[1592]: time="2026-01-20T23:56:08.170213484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:08.172259 containerd[1592]: time="2026-01-20T23:56:08.172185102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:08.172259 containerd[1592]: time="2026-01-20T23:56:08.172241103Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:08.175977 containerd[1592]: time="2026-01-20T23:56:08.175909415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:08.177543 containerd[1592]: time="2026-01-20T23:56:08.176504981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.400981612s" Jan 20 23:56:08.177543 containerd[1592]: time="2026-01-20T23:56:08.177519350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 23:56:08.182126 containerd[1592]: time="2026-01-20T23:56:08.182086670Z" level=info msg="CreateContainer within sandbox \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 23:56:08.194199 containerd[1592]: time="2026-01-20T23:56:08.193496412Z" level=info msg="Container 7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:08.202941 containerd[1592]: time="2026-01-20T23:56:08.202876336Z" level=info msg="CreateContainer within sandbox \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335\"" Jan 20 23:56:08.204147 containerd[1592]: time="2026-01-20T23:56:08.204066787Z" level=info msg="StartContainer for \"7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335\"" Jan 20 23:56:08.207248 containerd[1592]: time="2026-01-20T23:56:08.207163134Z" level=info msg="connecting to shim 
7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335" address="unix:///run/containerd/s/2ade32952f235bdce045f4185e85c89938852422b0cbfcd52f2bba74c3d8d21c" protocol=ttrpc version=3 Jan 20 23:56:08.232455 systemd[1]: Started cri-containerd-7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335.scope - libcontainer container 7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335. Jan 20 23:56:08.280000 audit: BPF prog-id=164 op=LOAD Jan 20 23:56:08.280000 audit[3481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3328 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:08.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656362623334383864626161616637396630653861353631386336 Jan 20 23:56:08.280000 audit: BPF prog-id=165 op=LOAD Jan 20 23:56:08.280000 audit[3481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3328 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:08.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656362623334383864626161616637396630653861353631386336 Jan 20 23:56:08.280000 audit: BPF prog-id=165 op=UNLOAD Jan 20 23:56:08.280000 audit[3481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:08.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656362623334383864626161616637396630653861353631386336 Jan 20 23:56:08.280000 audit: BPF prog-id=164 op=UNLOAD Jan 20 23:56:08.280000 audit[3481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:08.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656362623334383864626161616637396630653861353631386336 Jan 20 23:56:08.280000 audit: BPF prog-id=166 op=LOAD Jan 20 23:56:08.280000 audit[3481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3328 pid=3481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:08.280000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765656362623334383864626161616637396630653861353631386336 Jan 20 23:56:08.302966 containerd[1592]: time="2026-01-20T23:56:08.302907868Z" level=info msg="StartContainer for \"7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335\" returns successfully" Jan 20 23:56:08.320966 systemd[1]: cri-containerd-7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335.scope: Deactivated successfully. Jan 20 23:56:08.325000 audit: BPF prog-id=166 op=UNLOAD Jan 20 23:56:08.327467 containerd[1592]: time="2026-01-20T23:56:08.327434567Z" level=info msg="received container exit event container_id:\"7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335\" id:\"7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335\" pid:3497 exited_at:{seconds:1768953368 nanos:326018115}" Jan 20 23:56:08.349160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7eecbb3488dbaaaf79f0e8a5618c6e28a2c487ff9eb886b9aaee2e2eaac2d335-rootfs.mount: Deactivated successfully. Jan 20 23:56:08.454479 kubelet[2824]: E0120 23:56:08.454340 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:08.596002 kubelet[2824]: I0120 23:56:08.595952 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 23:56:08.599479 containerd[1592]: time="2026-01-20T23:56:08.599383274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 23:56:08.618064 kubelet[2824]: I0120 23:56:08.617200 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56545c98bd-rjb6f" podStartSLOduration=2.704417383 podStartE2EDuration="4.617177553s" podCreationTimestamp="2026-01-20 23:56:04 +0000 UTC" firstStartedPulling="2026-01-20 23:56:04.862184392 +0000 UTC m=+29.531937807" lastFinishedPulling="2026-01-20 23:56:06.774944562 +0000 UTC m=+31.444697977" observedRunningTime="2026-01-20 23:56:07.608659001 +0000 UTC m=+32.278412416" watchObservedRunningTime="2026-01-20 23:56:08.617177553 +0000 UTC m=+33.286931008" Jan 20 23:56:10.453747 kubelet[2824]: E0120 23:56:10.453707 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:11.199817 containerd[1592]: time="2026-01-20T23:56:11.199733727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:11.202693 containerd[1592]: time="2026-01-20T23:56:11.202592708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 20 23:56:11.203705 containerd[1592]: time="2026-01-20T23:56:11.203640036Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 
23:56:11.206957 containerd[1592]: time="2026-01-20T23:56:11.206868899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:11.207183 containerd[1592]: time="2026-01-20T23:56:11.207137661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.607670506s" Jan 20 23:56:11.207183 containerd[1592]: time="2026-01-20T23:56:11.207172062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 23:56:11.212133 containerd[1592]: time="2026-01-20T23:56:11.212026337Z" level=info msg="CreateContainer within sandbox \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 23:56:11.226818 containerd[1592]: time="2026-01-20T23:56:11.222083011Z" level=info msg="Container 74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:11.248516 containerd[1592]: time="2026-01-20T23:56:11.248437645Z" level=info msg="CreateContainer within sandbox \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a\"" Jan 20 23:56:11.250133 containerd[1592]: time="2026-01-20T23:56:11.250088657Z" level=info msg="StartContainer for \"74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a\"" Jan 20 23:56:11.254152 containerd[1592]: time="2026-01-20T23:56:11.254086847Z" level=info msg="connecting to shim 74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a" address="unix:///run/containerd/s/2ade32952f235bdce045f4185e85c89938852422b0cbfcd52f2bba74c3d8d21c" protocol=ttrpc version=3 Jan 20 23:56:11.281303 systemd[1]: Started cri-containerd-74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a.scope - libcontainer container 74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a. 
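The repeated kubelet warnings earlier in this log (driver-call.go / plugins.go) come from the FlexVolume probe loop: for each vendor~driver directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ the kubelet runs the contained executable with the argument `init` and expects a JSON status object on stdout. The nodeagent~uds/uds binary is not present yet at that point (the flexvol-driver container created above from the pod2daemon-flexvol image is what is expected to install it), so the call yields empty output and the unmarshal fails with "unexpected end of JSON input". Below is a minimal illustrative sketch of that handshake, not kubelet code; the driver path is taken from the log and the success payload mentioned in the comment is the documented example shape.

```python
#!/usr/bin/env python3
"""Illustrative sketch of the kubelet-side FlexVolume `init` handshake (not kubelet code)."""
import json
import subprocess

# Path probed in the log: plugin dir "nodeagent~uds" -> driver executable "uds".
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe(driver: str) -> dict:
    try:
        # A working driver prints e.g. {"status": "Success", "capabilities": {"attach": false}}.
        out = subprocess.run([driver, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # corresponds to: executable file not found in $PATH, output: ""
    if not out:
        # This is the condition behind the repeated "unexpected end of JSON input" errors.
        raise ValueError("unexpected end of JSON input")
    return json.loads(out)

if __name__ == "__main__":
    probe(DRIVER)
```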
Jan 20 23:56:11.343000 audit: BPF prog-id=167 op=LOAD Jan 20 23:56:11.343000 audit[3543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=3328 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:11.343000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734643739343236613535666364623166386637316438363330653964 Jan 20 23:56:11.344000 audit: BPF prog-id=168 op=LOAD Jan 20 23:56:11.344000 audit[3543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=3328 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734643739343236613535666364623166386637316438363330653964 Jan 20 23:56:11.344000 audit: BPF prog-id=168 op=UNLOAD Jan 20 23:56:11.344000 audit[3543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734643739343236613535666364623166386637316438363330653964 Jan 20 23:56:11.344000 audit: BPF prog-id=167 op=UNLOAD Jan 20 23:56:11.344000 audit[3543]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734643739343236613535666364623166386637316438363330653964 Jan 20 23:56:11.344000 audit: BPF prog-id=169 op=LOAD Jan 20 23:56:11.344000 audit[3543]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=3328 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:11.344000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3734643739343236613535666364623166386637316438363330653964 Jan 20 23:56:11.370373 containerd[1592]: time="2026-01-20T23:56:11.370334581Z" level=info msg="StartContainer for 
\"74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a\" returns successfully" Jan 20 23:56:11.920165 containerd[1592]: time="2026-01-20T23:56:11.920114664Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 23:56:11.924965 systemd[1]: cri-containerd-74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a.scope: Deactivated successfully. Jan 20 23:56:11.925467 systemd[1]: cri-containerd-74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a.scope: Consumed 527ms CPU time, 188.9M memory peak, 165.9M written to disk. Jan 20 23:56:11.927476 containerd[1592]: time="2026-01-20T23:56:11.927122955Z" level=info msg="received container exit event container_id:\"74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a\" id:\"74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a\" pid:3555 exited_at:{seconds:1768953371 nanos:926792313}" Jan 20 23:56:11.930169 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 20 23:56:11.930271 kernel: audit: type=1334 audit(1768953371.928:566): prog-id=169 op=UNLOAD Jan 20 23:56:11.928000 audit: BPF prog-id=169 op=UNLOAD Jan 20 23:56:11.961265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74d79426a55fcdb1f8f71d8630e9dd496868cccb1a7bf2430c648880e7eb6c5a-rootfs.mount: Deactivated successfully. Jan 20 23:56:12.028075 kubelet[2824]: I0120 23:56:12.027730 2824 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 23:56:12.080090 systemd[1]: Created slice kubepods-besteffort-pod867a1dc3_f4d9_4cba_a9b8_47adcf051929.slice - libcontainer container kubepods-besteffort-pod867a1dc3_f4d9_4cba_a9b8_47adcf051929.slice. 
Jan 20 23:56:12.089113 kubelet[2824]: W0120 23:56:12.089080 2824 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4547-0-0-n-f640cc67e1" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object Jan 20 23:56:12.089958 kubelet[2824]: I0120 23:56:12.089842 2824 status_manager.go:890] "Failed to get status for pod" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" err="pods \"calico-apiserver-799c555658-fmt6r\" is forbidden: User \"system:node:ci-4547-0-0-n-f640cc67e1\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object" Jan 20 23:56:12.091362 kubelet[2824]: E0120 23:56:12.090131 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4547-0-0-n-f640cc67e1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object" logger="UnhandledError" Jan 20 23:56:12.091362 kubelet[2824]: W0120 23:56:12.089451 2824 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4547-0-0-n-f640cc67e1" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object Jan 20 23:56:12.091362 kubelet[2824]: E0120 23:56:12.090174 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4547-0-0-n-f640cc67e1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object" logger="UnhandledError" Jan 20 23:56:12.091080 systemd[1]: Created slice kubepods-besteffort-podc7a5ed03_84ad_40f0_b474_3e36847dc1ab.slice - libcontainer container kubepods-besteffort-podc7a5ed03_84ad_40f0_b474_3e36847dc1ab.slice. 
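The "cni plugin not initialized" pod-sync errors and the config-reload failure above share one cause: the runtime only reports the node network as ready once an actual CNI network config exists under /etc/cni/net.d, and the calico-kubeconfig write seen at 23:56:11 is not such a config. A rough readiness check along those lines is sketched below; the directory comes from the log messages, while the set of file extensions containerd actually loads is an assumption here.

```python
#!/usr/bin/env python3
"""Rough sketch: does /etc/cni/net.d contain anything loadable as a CNI network config?"""
from pathlib import Path

CNI_CONF_DIR = Path("/etc/cni/net.d")  # default location, matching the log messages

def cni_config_present(conf_dir: Path = CNI_CONF_DIR) -> bool:
    # Auxiliary files such as "calico-kubeconfig" match none of these patterns,
    # hence "no network config found in /etc/cni/net.d" even after that file is written.
    return any(list(conf_dir.glob(pat)) for pat in ("*.conf", "*.conflist", "*.json"))

if __name__ == "__main__":
    print("CNI config present:", cni_config_present())
```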
Jan 20 23:56:12.095071 kubelet[2824]: W0120 23:56:12.095010 2824 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4547-0-0-n-f640cc67e1" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object Jan 20 23:56:12.098419 kubelet[2824]: E0120 23:56:12.098352 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4547-0-0-n-f640cc67e1\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object" logger="UnhandledError" Jan 20 23:56:12.098419 kubelet[2824]: W0120 23:56:12.097760 2824 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:ci-4547-0-0-n-f640cc67e1" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object Jan 20 23:56:12.098419 kubelet[2824]: E0120 23:56:12.098394 2824 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4547-0-0-n-f640cc67e1\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4547-0-0-n-f640cc67e1' and this object" logger="UnhandledError" Jan 20 23:56:12.112759 kubelet[2824]: I0120 23:56:12.112578 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp6pl\" (UniqueName: \"kubernetes.io/projected/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-kube-api-access-hp6pl\") pod \"whisker-7fff75b568-22gzl\" (UID: \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\") " pod="calico-system/whisker-7fff75b568-22gzl" Jan 20 23:56:12.112759 kubelet[2824]: I0120 23:56:12.112714 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/867a1dc3-f4d9-4cba-a9b8-47adcf051929-calico-apiserver-certs\") pod \"calico-apiserver-799c555658-fmt6r\" (UID: \"867a1dc3-f4d9-4cba-a9b8-47adcf051929\") " pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" Jan 20 23:56:12.114061 kubelet[2824]: I0120 23:56:12.113283 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgrbz\" (UniqueName: \"kubernetes.io/projected/867a1dc3-f4d9-4cba-a9b8-47adcf051929-kube-api-access-wgrbz\") pod \"calico-apiserver-799c555658-fmt6r\" (UID: \"867a1dc3-f4d9-4cba-a9b8-47adcf051929\") " pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" Jan 20 23:56:12.114335 kubelet[2824]: I0120 23:56:12.113331 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-backend-key-pair\") pod \"whisker-7fff75b568-22gzl\" (UID: \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\") " pod="calico-system/whisker-7fff75b568-22gzl" Jan 20 23:56:12.114475 
kubelet[2824]: I0120 23:56:12.114423 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-ca-bundle\") pod \"whisker-7fff75b568-22gzl\" (UID: \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\") " pod="calico-system/whisker-7fff75b568-22gzl" Jan 20 23:56:12.115390 systemd[1]: Created slice kubepods-besteffort-pod3f49a160_e207_435e_86a4_138a2a624ffb.slice - libcontainer container kubepods-besteffort-pod3f49a160_e207_435e_86a4_138a2a624ffb.slice. Jan 20 23:56:12.125368 systemd[1]: Created slice kubepods-besteffort-poddddef414_cca7_4fb7_84c5_239896cb0ee3.slice - libcontainer container kubepods-besteffort-poddddef414_cca7_4fb7_84c5_239896cb0ee3.slice. Jan 20 23:56:12.136378 systemd[1]: Created slice kubepods-besteffort-pod0ab4b60b_a35a_4fcf_b5ca_07bcd71284a0.slice - libcontainer container kubepods-besteffort-pod0ab4b60b_a35a_4fcf_b5ca_07bcd71284a0.slice. Jan 20 23:56:12.145643 systemd[1]: Created slice kubepods-burstable-pod81e80bc5_5484_4fcb_9332_a2cfbe9d8655.slice - libcontainer container kubepods-burstable-pod81e80bc5_5484_4fcb_9332_a2cfbe9d8655.slice. Jan 20 23:56:12.157456 systemd[1]: Created slice kubepods-burstable-pod1dae76fe_b281_4e92_8149_0d08a51ef82b.slice - libcontainer container kubepods-burstable-pod1dae76fe_b281_4e92_8149_0d08a51ef82b.slice. Jan 20 23:56:12.215379 kubelet[2824]: I0120 23:56:12.215330 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0-config\") pod \"goldmane-666569f655-wkbbm\" (UID: \"0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0\") " pod="calico-system/goldmane-666569f655-wkbbm" Jan 20 23:56:12.215379 kubelet[2824]: I0120 23:56:12.215380 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0-goldmane-key-pair\") pod \"goldmane-666569f655-wkbbm\" (UID: \"0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0\") " pod="calico-system/goldmane-666569f655-wkbbm" Jan 20 23:56:12.215640 kubelet[2824]: I0120 23:56:12.215410 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1dae76fe-b281-4e92-8149-0d08a51ef82b-config-volume\") pod \"coredns-668d6bf9bc-hqqpq\" (UID: \"1dae76fe-b281-4e92-8149-0d08a51ef82b\") " pod="kube-system/coredns-668d6bf9bc-hqqpq" Jan 20 23:56:12.215640 kubelet[2824]: I0120 23:56:12.215429 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhtz\" (UniqueName: \"kubernetes.io/projected/3f49a160-e207-435e-86a4-138a2a624ffb-kube-api-access-wfhtz\") pod \"calico-apiserver-799c555658-fhgzx\" (UID: \"3f49a160-e207-435e-86a4-138a2a624ffb\") " pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" Jan 20 23:56:12.215640 kubelet[2824]: I0120 23:56:12.215456 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjjwf\" (UniqueName: \"kubernetes.io/projected/81e80bc5-5484-4fcb-9332-a2cfbe9d8655-kube-api-access-qjjwf\") pod \"coredns-668d6bf9bc-9nftp\" (UID: \"81e80bc5-5484-4fcb-9332-a2cfbe9d8655\") " pod="kube-system/coredns-668d6bf9bc-9nftp" Jan 20 23:56:12.215961 kubelet[2824]: I0120 23:56:12.215816 2824 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dddef414-cca7-4fb7-84c5-239896cb0ee3-tigera-ca-bundle\") pod \"calico-kube-controllers-5446b598c6-knjcl\" (UID: \"dddef414-cca7-4fb7-84c5-239896cb0ee3\") " pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" Jan 20 23:56:12.215961 kubelet[2824]: I0120 23:56:12.215892 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0-goldmane-ca-bundle\") pod \"goldmane-666569f655-wkbbm\" (UID: \"0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0\") " pod="calico-system/goldmane-666569f655-wkbbm" Jan 20 23:56:12.215961 kubelet[2824]: I0120 23:56:12.215912 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn595\" (UniqueName: \"kubernetes.io/projected/dddef414-cca7-4fb7-84c5-239896cb0ee3-kube-api-access-fn595\") pod \"calico-kube-controllers-5446b598c6-knjcl\" (UID: \"dddef414-cca7-4fb7-84c5-239896cb0ee3\") " pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" Jan 20 23:56:12.216276 kubelet[2824]: I0120 23:56:12.216128 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3f49a160-e207-435e-86a4-138a2a624ffb-calico-apiserver-certs\") pod \"calico-apiserver-799c555658-fhgzx\" (UID: \"3f49a160-e207-435e-86a4-138a2a624ffb\") " pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" Jan 20 23:56:12.216276 kubelet[2824]: I0120 23:56:12.216170 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81e80bc5-5484-4fcb-9332-a2cfbe9d8655-config-volume\") pod \"coredns-668d6bf9bc-9nftp\" (UID: \"81e80bc5-5484-4fcb-9332-a2cfbe9d8655\") " pod="kube-system/coredns-668d6bf9bc-9nftp" Jan 20 23:56:12.216276 kubelet[2824]: I0120 23:56:12.216201 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lx5d\" (UniqueName: \"kubernetes.io/projected/1dae76fe-b281-4e92-8149-0d08a51ef82b-kube-api-access-2lx5d\") pod \"coredns-668d6bf9bc-hqqpq\" (UID: \"1dae76fe-b281-4e92-8149-0d08a51ef82b\") " pod="kube-system/coredns-668d6bf9bc-hqqpq" Jan 20 23:56:12.216276 kubelet[2824]: I0120 23:56:12.216218 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qdtv\" (UniqueName: \"kubernetes.io/projected/0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0-kube-api-access-7qdtv\") pod \"goldmane-666569f655-wkbbm\" (UID: \"0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0\") " pod="calico-system/goldmane-666569f655-wkbbm" Jan 20 23:56:12.432166 containerd[1592]: time="2026-01-20T23:56:12.432108991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5446b598c6-knjcl,Uid:dddef414-cca7-4fb7-84c5-239896cb0ee3,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:12.443124 containerd[1592]: time="2026-01-20T23:56:12.443031947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wkbbm,Uid:0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:12.455158 containerd[1592]: time="2026-01-20T23:56:12.454578786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-9nftp,Uid:81e80bc5-5484-4fcb-9332-a2cfbe9d8655,Namespace:kube-system,Attempt:0,}" Jan 20 23:56:12.463124 systemd[1]: Created slice kubepods-besteffort-pod9b72cdbf_b6bd_45ae_98ac_50d5aed18456.slice - libcontainer container kubepods-besteffort-pod9b72cdbf_b6bd_45ae_98ac_50d5aed18456.slice. Jan 20 23:56:12.468270 containerd[1592]: time="2026-01-20T23:56:12.468114679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqqpq,Uid:1dae76fe-b281-4e92-8149-0d08a51ef82b,Namespace:kube-system,Attempt:0,}" Jan 20 23:56:12.472130 containerd[1592]: time="2026-01-20T23:56:12.471480303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5sc47,Uid:9b72cdbf-b6bd-45ae-98ac-50d5aed18456,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:12.580221 containerd[1592]: time="2026-01-20T23:56:12.580164412Z" level=error msg="Failed to destroy network for sandbox \"c8bee105ce2b98303b4c75dbdf10b1c82e80f7b2816bbf3f8dc302263e48f0f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.585530 containerd[1592]: time="2026-01-20T23:56:12.585022405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5446b598c6-knjcl,Uid:dddef414-cca7-4fb7-84c5-239896cb0ee3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bee105ce2b98303b4c75dbdf10b1c82e80f7b2816bbf3f8dc302263e48f0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.585866 kubelet[2824]: E0120 23:56:12.585824 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bee105ce2b98303b4c75dbdf10b1c82e80f7b2816bbf3f8dc302263e48f0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.586380 kubelet[2824]: E0120 23:56:12.585988 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bee105ce2b98303b4c75dbdf10b1c82e80f7b2816bbf3f8dc302263e48f0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" Jan 20 23:56:12.586380 kubelet[2824]: E0120 23:56:12.586014 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8bee105ce2b98303b4c75dbdf10b1c82e80f7b2816bbf3f8dc302263e48f0f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" Jan 20 23:56:12.586380 kubelet[2824]: E0120 23:56:12.586077 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8bee105ce2b98303b4c75dbdf10b1c82e80f7b2816bbf3f8dc302263e48f0f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:56:12.599217 containerd[1592]: time="2026-01-20T23:56:12.599169023Z" level=error msg="Failed to destroy network for sandbox \"2ebf0dbd0573ed5f8e622ab4e9a911282bbf4a75f28bbf27a8a6d2dad196f368\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.602972 containerd[1592]: time="2026-01-20T23:56:12.602857328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wkbbm,Uid:0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ebf0dbd0573ed5f8e622ab4e9a911282bbf4a75f28bbf27a8a6d2dad196f368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.604151 kubelet[2824]: E0120 23:56:12.603303 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ebf0dbd0573ed5f8e622ab4e9a911282bbf4a75f28bbf27a8a6d2dad196f368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.604151 kubelet[2824]: E0120 23:56:12.603363 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ebf0dbd0573ed5f8e622ab4e9a911282bbf4a75f28bbf27a8a6d2dad196f368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wkbbm" Jan 20 23:56:12.604151 kubelet[2824]: E0120 23:56:12.603382 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ebf0dbd0573ed5f8e622ab4e9a911282bbf4a75f28bbf27a8a6d2dad196f368\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wkbbm" Jan 20 23:56:12.604658 kubelet[2824]: E0120 23:56:12.603428 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ebf0dbd0573ed5f8e622ab4e9a911282bbf4a75f28bbf27a8a6d2dad196f368\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:56:12.617649 containerd[1592]: time="2026-01-20T23:56:12.617506589Z" level=error msg="Failed to destroy network for sandbox \"a0b090b572bfd664373f850d846ea3bec5c770de05f354d1d438bd9ca7051160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.622271 containerd[1592]: time="2026-01-20T23:56:12.622177461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9nftp,Uid:81e80bc5-5484-4fcb-9332-a2cfbe9d8655,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b090b572bfd664373f850d846ea3bec5c770de05f354d1d438bd9ca7051160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.622633 containerd[1592]: time="2026-01-20T23:56:12.622395063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 23:56:12.623198 kubelet[2824]: E0120 23:56:12.623154 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b090b572bfd664373f850d846ea3bec5c770de05f354d1d438bd9ca7051160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.623422 kubelet[2824]: E0120 23:56:12.623207 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b090b572bfd664373f850d846ea3bec5c770de05f354d1d438bd9ca7051160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9nftp" Jan 20 23:56:12.623422 kubelet[2824]: E0120 23:56:12.623227 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0b090b572bfd664373f850d846ea3bec5c770de05f354d1d438bd9ca7051160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9nftp" Jan 20 23:56:12.623422 kubelet[2824]: E0120 23:56:12.623310 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9nftp_kube-system(81e80bc5-5484-4fcb-9332-a2cfbe9d8655)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9nftp_kube-system(81e80bc5-5484-4fcb-9332-a2cfbe9d8655)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0b090b572bfd664373f850d846ea3bec5c770de05f354d1d438bd9ca7051160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9nftp" 
podUID="81e80bc5-5484-4fcb-9332-a2cfbe9d8655" Jan 20 23:56:12.643908 containerd[1592]: time="2026-01-20T23:56:12.643189166Z" level=error msg="Failed to destroy network for sandbox \"ae89ab8a53a94907c87a51e716ed64344a5985510356a9cfc4278b3920b9ee01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.647321 containerd[1592]: time="2026-01-20T23:56:12.647154074Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5sc47,Uid:9b72cdbf-b6bd-45ae-98ac-50d5aed18456,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae89ab8a53a94907c87a51e716ed64344a5985510356a9cfc4278b3920b9ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.648509 kubelet[2824]: E0120 23:56:12.647865 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae89ab8a53a94907c87a51e716ed64344a5985510356a9cfc4278b3920b9ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.648509 kubelet[2824]: E0120 23:56:12.647921 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae89ab8a53a94907c87a51e716ed64344a5985510356a9cfc4278b3920b9ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:12.648509 kubelet[2824]: E0120 23:56:12.647946 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae89ab8a53a94907c87a51e716ed64344a5985510356a9cfc4278b3920b9ee01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5sc47" Jan 20 23:56:12.648784 kubelet[2824]: E0120 23:56:12.647986 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae89ab8a53a94907c87a51e716ed64344a5985510356a9cfc4278b3920b9ee01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:12.660480 containerd[1592]: time="2026-01-20T23:56:12.660414085Z" level=error msg="Failed to destroy network for sandbox \"7dff09e0121154d7965b4fb9fc52cfa9d4871ed86baace3529b9d37de325cd32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.663686 containerd[1592]: time="2026-01-20T23:56:12.663581827Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqqpq,Uid:1dae76fe-b281-4e92-8149-0d08a51ef82b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dff09e0121154d7965b4fb9fc52cfa9d4871ed86baace3529b9d37de325cd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.664076 kubelet[2824]: E0120 23:56:12.663934 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dff09e0121154d7965b4fb9fc52cfa9d4871ed86baace3529b9d37de325cd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:12.664208 kubelet[2824]: E0120 23:56:12.664172 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dff09e0121154d7965b4fb9fc52cfa9d4871ed86baace3529b9d37de325cd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqqpq" Jan 20 23:56:12.664820 kubelet[2824]: E0120 23:56:12.664335 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7dff09e0121154d7965b4fb9fc52cfa9d4871ed86baace3529b9d37de325cd32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hqqpq" Jan 20 23:56:12.664820 kubelet[2824]: E0120 23:56:12.664399 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hqqpq_kube-system(1dae76fe-b281-4e92-8149-0d08a51ef82b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hqqpq_kube-system(1dae76fe-b281-4e92-8149-0d08a51ef82b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7dff09e0121154d7965b4fb9fc52cfa9d4871ed86baace3529b9d37de325cd32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hqqpq" podUID="1dae76fe-b281-4e92-8149-0d08a51ef82b" Jan 20 23:56:13.217098 kubelet[2824]: E0120 23:56:13.216851 2824 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 20 23:56:13.217098 kubelet[2824]: E0120 23:56:13.216975 2824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-ca-bundle podName:c7a5ed03-84ad-40f0-b474-3e36847dc1ab nodeName:}" failed. No retries permitted until 2026-01-20 23:56:13.716946948 +0000 UTC m=+38.386700363 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-ca-bundle") pod "whisker-7fff75b568-22gzl" (UID: "c7a5ed03-84ad-40f0-b474-3e36847dc1ab") : failed to sync configmap cache: timed out waiting for the condition Jan 20 23:56:13.218214 kubelet[2824]: E0120 23:56:13.217763 2824 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 23:56:13.218214 kubelet[2824]: E0120 23:56:13.217850 2824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/867a1dc3-f4d9-4cba-a9b8-47adcf051929-calico-apiserver-certs podName:867a1dc3-f4d9-4cba-a9b8-47adcf051929 nodeName:}" failed. No retries permitted until 2026-01-20 23:56:13.717830634 +0000 UTC m=+38.387584049 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/867a1dc3-f4d9-4cba-a9b8-47adcf051929-calico-apiserver-certs") pod "calico-apiserver-799c555658-fmt6r" (UID: "867a1dc3-f4d9-4cba-a9b8-47adcf051929") : failed to sync secret cache: timed out waiting for the condition Jan 20 23:56:13.218214 kubelet[2824]: E0120 23:56:13.217880 2824 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jan 20 23:56:13.218214 kubelet[2824]: E0120 23:56:13.217914 2824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-backend-key-pair podName:c7a5ed03-84ad-40f0-b474-3e36847dc1ab nodeName:}" failed. No retries permitted until 2026-01-20 23:56:13.717902274 +0000 UTC m=+38.387655689 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-backend-key-pair") pod "whisker-7fff75b568-22gzl" (UID: "c7a5ed03-84ad-40f0-b474-3e36847dc1ab") : failed to sync secret cache: timed out waiting for the condition Jan 20 23:56:13.321958 kubelet[2824]: E0120 23:56:13.321490 2824 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jan 20 23:56:13.321958 kubelet[2824]: E0120 23:56:13.321624 2824 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3f49a160-e207-435e-86a4-138a2a624ffb-calico-apiserver-certs podName:3f49a160-e207-435e-86a4-138a2a624ffb nodeName:}" failed. No retries permitted until 2026-01-20 23:56:13.821594904 +0000 UTC m=+38.491348359 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3f49a160-e207-435e-86a4-138a2a624ffb-calico-apiserver-certs") pod "calico-apiserver-799c555658-fhgzx" (UID: "3f49a160-e207-435e-86a4-138a2a624ffb") : failed to sync secret cache: timed out waiting for the condition Jan 20 23:56:13.891382 containerd[1592]: time="2026-01-20T23:56:13.891167665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fmt6r,Uid:867a1dc3-f4d9-4cba-a9b8-47adcf051929,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:56:13.906754 containerd[1592]: time="2026-01-20T23:56:13.906708565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fff75b568-22gzl,Uid:c7a5ed03-84ad-40f0-b474-3e36847dc1ab,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:13.925182 containerd[1592]: time="2026-01-20T23:56:13.924960563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fhgzx,Uid:3f49a160-e207-435e-86a4-138a2a624ffb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:56:14.000576 containerd[1592]: time="2026-01-20T23:56:14.000479931Z" level=error msg="Failed to destroy network for sandbox \"4b2a02131ab8762ebc14992f6fee66e33bfb0fa47c024713497307f2ce5fc137\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.009455 containerd[1592]: time="2026-01-20T23:56:14.009398106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fff75b568-22gzl,Uid:c7a5ed03-84ad-40f0-b474-3e36847dc1ab,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2a02131ab8762ebc14992f6fee66e33bfb0fa47c024713497307f2ce5fc137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.010042 kubelet[2824]: E0120 23:56:14.009955 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2a02131ab8762ebc14992f6fee66e33bfb0fa47c024713497307f2ce5fc137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.010479 kubelet[2824]: E0120 23:56:14.010015 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2a02131ab8762ebc14992f6fee66e33bfb0fa47c024713497307f2ce5fc137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fff75b568-22gzl" Jan 20 23:56:14.010479 kubelet[2824]: E0120 23:56:14.010067 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b2a02131ab8762ebc14992f6fee66e33bfb0fa47c024713497307f2ce5fc137\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fff75b568-22gzl" Jan 20 23:56:14.010479 kubelet[2824]: E0120 23:56:14.010167 2824 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fff75b568-22gzl_calico-system(c7a5ed03-84ad-40f0-b474-3e36847dc1ab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7fff75b568-22gzl_calico-system(c7a5ed03-84ad-40f0-b474-3e36847dc1ab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b2a02131ab8762ebc14992f6fee66e33bfb0fa47c024713497307f2ce5fc137\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fff75b568-22gzl" podUID="c7a5ed03-84ad-40f0-b474-3e36847dc1ab" Jan 20 23:56:14.033771 containerd[1592]: time="2026-01-20T23:56:14.033641092Z" level=error msg="Failed to destroy network for sandbox \"11bb86e6c7f8c976b01502c9cca19f632e8b739c62cdbaedd55f5a768f2a40e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.038080 containerd[1592]: time="2026-01-20T23:56:14.038002479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fmt6r,Uid:867a1dc3-f4d9-4cba-a9b8-47adcf051929,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bb86e6c7f8c976b01502c9cca19f632e8b739c62cdbaedd55f5a768f2a40e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.038927 kubelet[2824]: E0120 23:56:14.038504 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bb86e6c7f8c976b01502c9cca19f632e8b739c62cdbaedd55f5a768f2a40e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.038927 kubelet[2824]: E0120 23:56:14.038559 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bb86e6c7f8c976b01502c9cca19f632e8b739c62cdbaedd55f5a768f2a40e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" Jan 20 23:56:14.038927 kubelet[2824]: E0120 23:56:14.038584 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11bb86e6c7f8c976b01502c9cca19f632e8b739c62cdbaedd55f5a768f2a40e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" Jan 20 23:56:14.040200 kubelet[2824]: E0120 23:56:14.038636 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11bb86e6c7f8c976b01502c9cca19f632e8b739c62cdbaedd55f5a768f2a40e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:56:14.048067 containerd[1592]: time="2026-01-20T23:56:14.047994739Z" level=error msg="Failed to destroy network for sandbox \"eb413310282c46c5f85e159c318dce2efce605605656086e9a9bde9e0dc3d51e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.051281 containerd[1592]: time="2026-01-20T23:56:14.051219759Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fhgzx,Uid:3f49a160-e207-435e-86a4-138a2a624ffb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb413310282c46c5f85e159c318dce2efce605605656086e9a9bde9e0dc3d51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.051578 kubelet[2824]: E0120 23:56:14.051482 2824 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb413310282c46c5f85e159c318dce2efce605605656086e9a9bde9e0dc3d51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:56:14.051578 kubelet[2824]: E0120 23:56:14.051539 2824 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb413310282c46c5f85e159c318dce2efce605605656086e9a9bde9e0dc3d51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" Jan 20 23:56:14.051578 kubelet[2824]: E0120 23:56:14.051563 2824 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb413310282c46c5f85e159c318dce2efce605605656086e9a9bde9e0dc3d51e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" Jan 20 23:56:14.051833 kubelet[2824]: E0120 23:56:14.051657 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb413310282c46c5f85e159c318dce2efce605605656086e9a9bde9e0dc3d51e\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:56:14.250458 systemd[1]: run-netns-cni\x2d1a6bbce7\x2d98f4\x2d2a3f\x2dc260\x2de18df3140948.mount: Deactivated successfully. Jan 20 23:56:17.103010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233019511.mount: Deactivated successfully. Jan 20 23:56:17.135837 containerd[1592]: time="2026-01-20T23:56:17.135765110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 20 23:56:17.138960 containerd[1592]: time="2026-01-20T23:56:17.138900326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.516475902s" Jan 20 23:56:17.138960 containerd[1592]: time="2026-01-20T23:56:17.138948046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 23:56:17.159029 containerd[1592]: time="2026-01-20T23:56:17.158476743Z" level=info msg="CreateContainer within sandbox \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 23:56:17.177318 containerd[1592]: time="2026-01-20T23:56:17.177272397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:17.178221 containerd[1592]: time="2026-01-20T23:56:17.178188522Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:17.178998 containerd[1592]: time="2026-01-20T23:56:17.178968526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:56:17.198996 containerd[1592]: time="2026-01-20T23:56:17.198908345Z" level=info msg="Container e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:17.213291 containerd[1592]: time="2026-01-20T23:56:17.213144896Z" level=info msg="CreateContainer within sandbox \"000cf7f0bf9d5d6694f677df82bfb8c53417e47ec1fd7c9bed37d3bd02d36509\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e\"" Jan 20 23:56:17.214743 containerd[1592]: time="2026-01-20T23:56:17.214714184Z" level=info msg="StartContainer for \"e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e\"" Jan 20 23:56:17.217807 containerd[1592]: time="2026-01-20T23:56:17.217756879Z" level=info msg="connecting to shim e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e" address="unix:///run/containerd/s/2ade32952f235bdce045f4185e85c89938852422b0cbfcd52f2bba74c3d8d21c" protocol=ttrpc version=3 Jan 20 23:56:17.279343 systemd[1]: Started 
cri-containerd-e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e.scope - libcontainer container e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e. Jan 20 23:56:17.343510 kernel: audit: type=1334 audit(1768953377.339:567): prog-id=170 op=LOAD Jan 20 23:56:17.343685 kernel: audit: type=1300 audit(1768953377.339:567): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.339000 audit: BPF prog-id=170 op=LOAD Jan 20 23:56:17.339000 audit[3804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.339000 audit: BPF prog-id=171 op=LOAD Jan 20 23:56:17.339000 audit[3804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.349111 kernel: audit: type=1327 audit(1768953377.339:567): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.349199 kernel: audit: type=1334 audit(1768953377.339:568): prog-id=171 op=LOAD Jan 20 23:56:17.349223 kernel: audit: type=1300 audit(1768953377.339:568): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.351602 kernel: audit: type=1327 audit(1768953377.339:568): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.340000 audit: BPF prog-id=171 op=UNLOAD Jan 20 23:56:17.355406 kernel: audit: type=1334 audit(1768953377.340:569): prog-id=171 op=UNLOAD Jan 20 23:56:17.355521 kernel: audit: type=1300 audit(1768953377.340:569): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.355565 kernel: audit: type=1327 audit(1768953377.340:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.340000 audit[3804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.358788 kernel: audit: type=1334 audit(1768953377.340:570): prog-id=170 op=UNLOAD Jan 20 23:56:17.340000 audit: BPF prog-id=170 op=UNLOAD Jan 20 23:56:17.340000 audit[3804]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.340000 audit: BPF prog-id=172 op=LOAD Jan 20 23:56:17.340000 audit[3804]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3328 pid=3804 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:17.340000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531616338303237636434633163386338653635643462656362366631 Jan 20 23:56:17.380160 containerd[1592]: time="2026-01-20T23:56:17.380092490Z" level=info msg="StartContainer for \"e1ac8027cd4c1c8c8e65d4becb6f188a1ce0260bb4046a41b25d02e200636c2e\" returns successfully" Jan 20 23:56:17.517164 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 23:56:17.517390 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 20 23:56:17.668167 kubelet[2824]: I0120 23:56:17.667908 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nwdhk" podStartSLOduration=1.51169955 podStartE2EDuration="13.667887246s" podCreationTimestamp="2026-01-20 23:56:04 +0000 UTC" firstStartedPulling="2026-01-20 23:56:04.983474073 +0000 UTC m=+29.653227488" lastFinishedPulling="2026-01-20 23:56:17.139661769 +0000 UTC m=+41.809415184" observedRunningTime="2026-01-20 23:56:17.665077192 +0000 UTC m=+42.334830647" watchObservedRunningTime="2026-01-20 23:56:17.667887246 +0000 UTC m=+42.337640661" Jan 20 23:56:17.765489 kubelet[2824]: I0120 23:56:17.764199 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-backend-key-pair\") pod \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\" (UID: \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\") " Jan 20 23:56:17.765705 kubelet[2824]: I0120 23:56:17.765685 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp6pl\" (UniqueName: \"kubernetes.io/projected/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-kube-api-access-hp6pl\") pod \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\" (UID: \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\") " Jan 20 23:56:17.765847 kubelet[2824]: I0120 23:56:17.765821 2824 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-ca-bundle\") pod \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\" (UID: \"c7a5ed03-84ad-40f0-b474-3e36847dc1ab\") " Jan 20 23:56:17.770415 kubelet[2824]: I0120 23:56:17.770288 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c7a5ed03-84ad-40f0-b474-3e36847dc1ab" (UID: "c7a5ed03-84ad-40f0-b474-3e36847dc1ab"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 23:56:17.773059 kubelet[2824]: I0120 23:56:17.772761 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c7a5ed03-84ad-40f0-b474-3e36847dc1ab" (UID: "c7a5ed03-84ad-40f0-b474-3e36847dc1ab"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 23:56:17.775532 kubelet[2824]: I0120 23:56:17.775381 2824 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-kube-api-access-hp6pl" (OuterVolumeSpecName: "kube-api-access-hp6pl") pod "c7a5ed03-84ad-40f0-b474-3e36847dc1ab" (UID: "c7a5ed03-84ad-40f0-b474-3e36847dc1ab"). InnerVolumeSpecName "kube-api-access-hp6pl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 23:56:17.867575 kubelet[2824]: I0120 23:56:17.866725 2824 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hp6pl\" (UniqueName: \"kubernetes.io/projected/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-kube-api-access-hp6pl\") on node \"ci-4547-0-0-n-f640cc67e1\" DevicePath \"\"" Jan 20 23:56:17.867847 kubelet[2824]: I0120 23:56:17.867788 2824 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-ca-bundle\") on node \"ci-4547-0-0-n-f640cc67e1\" DevicePath \"\"" Jan 20 23:56:17.867847 kubelet[2824]: I0120 23:56:17.867816 2824 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7a5ed03-84ad-40f0-b474-3e36847dc1ab-whisker-backend-key-pair\") on node \"ci-4547-0-0-n-f640cc67e1\" DevicePath \"\"" Jan 20 23:56:18.104278 systemd[1]: var-lib-kubelet-pods-c7a5ed03\x2d84ad\x2d40f0\x2db474\x2d3e36847dc1ab-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 20 23:56:18.104385 systemd[1]: var-lib-kubelet-pods-c7a5ed03\x2d84ad\x2d40f0\x2db474\x2d3e36847dc1ab-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhp6pl.mount: Deactivated successfully. Jan 20 23:56:18.650985 systemd[1]: Removed slice kubepods-besteffort-podc7a5ed03_84ad_40f0_b474_3e36847dc1ab.slice - libcontainer container kubepods-besteffort-podc7a5ed03_84ad_40f0_b474_3e36847dc1ab.slice. Jan 20 23:56:18.734070 systemd[1]: Created slice kubepods-besteffort-pod511e82cf_6210_4f23_b7d3_73c990aafdbb.slice - libcontainer container kubepods-besteffort-pod511e82cf_6210_4f23_b7d3_73c990aafdbb.slice. Jan 20 23:56:18.773787 kubelet[2824]: I0120 23:56:18.773743 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/511e82cf-6210-4f23-b7d3-73c990aafdbb-whisker-backend-key-pair\") pod \"whisker-9ccb8665d-km95p\" (UID: \"511e82cf-6210-4f23-b7d3-73c990aafdbb\") " pod="calico-system/whisker-9ccb8665d-km95p" Jan 20 23:56:18.774239 kubelet[2824]: I0120 23:56:18.774219 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/511e82cf-6210-4f23-b7d3-73c990aafdbb-whisker-ca-bundle\") pod \"whisker-9ccb8665d-km95p\" (UID: \"511e82cf-6210-4f23-b7d3-73c990aafdbb\") " pod="calico-system/whisker-9ccb8665d-km95p" Jan 20 23:56:18.774972 kubelet[2824]: I0120 23:56:18.774369 2824 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljss6\" (UniqueName: \"kubernetes.io/projected/511e82cf-6210-4f23-b7d3-73c990aafdbb-kube-api-access-ljss6\") pod \"whisker-9ccb8665d-km95p\" (UID: \"511e82cf-6210-4f23-b7d3-73c990aafdbb\") " pod="calico-system/whisker-9ccb8665d-km95p" Jan 20 23:56:19.038537 containerd[1592]: time="2026-01-20T23:56:19.038489793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9ccb8665d-km95p,Uid:511e82cf-6210-4f23-b7d3-73c990aafdbb,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:19.288793 systemd-networkd[1481]: cali0cabd920056: Link UP Jan 20 23:56:19.291501 systemd-networkd[1481]: cali0cabd920056: Gained carrier Jan 20 23:56:19.324068 containerd[1592]: 2026-01-20 23:56:19.074 [INFO][4029] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 
23:56:19.324068 containerd[1592]: 2026-01-20 23:56:19.146 [INFO][4029] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0 whisker-9ccb8665d- calico-system 511e82cf-6210-4f23-b7d3-73c990aafdbb 869 0 2026-01-20 23:56:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9ccb8665d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 whisker-9ccb8665d-km95p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0cabd920056 [] [] }} ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-" Jan 20 23:56:19.324068 containerd[1592]: 2026-01-20 23:56:19.146 [INFO][4029] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.324068 containerd[1592]: 2026-01-20 23:56:19.218 [INFO][4042] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" HandleID="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Workload="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.218 [INFO][4042] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" HandleID="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Workload="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a410), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"whisker-9ccb8665d-km95p", "timestamp":"2026-01-20 23:56:19.218125781 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.218 [INFO][4042] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.218 [INFO][4042] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.218 [INFO][4042] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.231 [INFO][4042] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.237 [INFO][4042] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.243 [INFO][4042] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.246 [INFO][4042] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324340 containerd[1592]: 2026-01-20 23:56:19.248 [INFO][4042] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.249 [INFO][4042] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.256 [INFO][4042] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79 Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.262 [INFO][4042] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.269 [INFO][4042] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.193/26] block=192.168.112.192/26 handle="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.269 [INFO][4042] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.193/26] handle="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.270 [INFO][4042] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 23:56:19.324534 containerd[1592]: 2026-01-20 23:56:19.270 [INFO][4042] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.193/26] IPv6=[] ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" HandleID="k8s-pod-network.3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Workload="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.324673 containerd[1592]: 2026-01-20 23:56:19.273 [INFO][4029] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0", GenerateName:"whisker-9ccb8665d-", Namespace:"calico-system", SelfLink:"", UID:"511e82cf-6210-4f23-b7d3-73c990aafdbb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9ccb8665d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"whisker-9ccb8665d-km95p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0cabd920056", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:19.324673 containerd[1592]: 2026-01-20 23:56:19.273 [INFO][4029] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.193/32] ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.324745 containerd[1592]: 2026-01-20 23:56:19.273 [INFO][4029] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0cabd920056 ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.324745 containerd[1592]: 2026-01-20 23:56:19.296 [INFO][4029] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.324788 containerd[1592]: 2026-01-20 23:56:19.297 [INFO][4029] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" 
Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0", GenerateName:"whisker-9ccb8665d-", Namespace:"calico-system", SelfLink:"", UID:"511e82cf-6210-4f23-b7d3-73c990aafdbb", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9ccb8665d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79", Pod:"whisker-9ccb8665d-km95p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0cabd920056", MAC:"06:b6:17:aa:73:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:19.324840 containerd[1592]: 2026-01-20 23:56:19.318 [INFO][4029] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" Namespace="calico-system" Pod="whisker-9ccb8665d-km95p" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-whisker--9ccb8665d--km95p-eth0" Jan 20 23:56:19.412970 containerd[1592]: time="2026-01-20T23:56:19.412875156Z" level=info msg="connecting to shim 3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79" address="unix:///run/containerd/s/819bfcb7cf413f43cd4f45a2ad3b0ee6a14c8a81170ecfa9ce63d3213751bf2e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:19.454370 systemd[1]: Started cri-containerd-3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79.scope - libcontainer container 3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79. 
Jan 20 23:56:19.457816 kubelet[2824]: I0120 23:56:19.457624 2824 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a5ed03-84ad-40f0-b474-3e36847dc1ab" path="/var/lib/kubelet/pods/c7a5ed03-84ad-40f0-b474-3e36847dc1ab/volumes" Jan 20 23:56:19.470000 audit: BPF prog-id=173 op=LOAD Jan 20 23:56:19.470000 audit: BPF prog-id=174 op=LOAD Jan 20 23:56:19.470000 audit[4083]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.470000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.471000 audit: BPF prog-id=174 op=UNLOAD Jan 20 23:56:19.471000 audit[4083]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.471000 audit: BPF prog-id=175 op=LOAD Jan 20 23:56:19.471000 audit[4083]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.471000 audit: BPF prog-id=176 op=LOAD Jan 20 23:56:19.471000 audit[4083]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.471000 audit: BPF prog-id=176 op=UNLOAD Jan 20 23:56:19.471000 audit[4083]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.471000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.471000 audit: BPF prog-id=175 op=UNLOAD Jan 20 23:56:19.471000 audit[4083]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.471000 audit: BPF prog-id=177 op=LOAD Jan 20 23:56:19.471000 audit[4083]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4072 pid=4083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:19.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365353835333638316264613235326331616636323338636365393539 Jan 20 23:56:19.511396 containerd[1592]: time="2026-01-20T23:56:19.511346788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9ccb8665d-km95p,Uid:511e82cf-6210-4f23-b7d3-73c990aafdbb,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e5853681bda252c1af6238cce959b44796d8090c1434944e17b364af1d2dc79\"" Jan 20 23:56:19.523177 containerd[1592]: time="2026-01-20T23:56:19.523124720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:56:19.858366 containerd[1592]: time="2026-01-20T23:56:19.858273030Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:19.860083 containerd[1592]: time="2026-01-20T23:56:19.859976878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:56:19.860188 containerd[1592]: time="2026-01-20T23:56:19.860131558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:19.863118 kubelet[2824]: E0120 23:56:19.863032 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:56:19.864147 kubelet[2824]: E0120 23:56:19.863134 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:56:19.873057 kubelet[2824]: E0120 23:56:19.872991 
2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad3fd4707e1b4e4592bbc8d326d656a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:19.875660 containerd[1592]: time="2026-01-20T23:56:19.875526106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:56:20.210685 containerd[1592]: time="2026-01-20T23:56:20.210596798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:20.212278 containerd[1592]: time="2026-01-20T23:56:20.212190845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:56:20.212378 containerd[1592]: time="2026-01-20T23:56:20.212300605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:20.212670 kubelet[2824]: E0120 23:56:20.212626 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:56:20.212791 kubelet[2824]: E0120 23:56:20.212698 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:56:20.212934 kubelet[2824]: E0120 23:56:20.212865 2824 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:20.214398 kubelet[2824]: E0120 23:56:20.214345 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:56:20.654658 kubelet[2824]: E0120 23:56:20.654454 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:56:20.664461 systemd-networkd[1481]: cali0cabd920056: Gained IPv6LL Jan 20 23:56:20.665875 kubelet[2824]: I0120 23:56:20.665439 2824 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 23:56:20.690000 audit[4155]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4155 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:20.690000 audit[4155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff3c74f90 a2=0 a3=1 items=0 ppid=2925 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:20.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:20.701000 audit[4155]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4155 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:20.701000 audit[4155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff3c74f90 a2=0 a3=1 items=0 ppid=2925 pid=4155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:20.701000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:20.713000 audit[4157]: NETFILTER_CFG table=filter:123 family=2 entries=21 op=nft_register_rule pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:20.713000 audit[4157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffee88dd10 a2=0 a3=1 items=0 ppid=2925 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:20.713000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:20.718000 audit[4157]: NETFILTER_CFG table=nat:124 family=2 entries=19 op=nft_register_chain pid=4157 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:20.718000 audit[4157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffee88dd10 a2=0 a3=1 items=0 ppid=2925 pid=4157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:20.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:21.409000 audit: BPF prog-id=178 op=LOAD Jan 20 23:56:21.409000 audit[4182]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdce05868 a2=98 a3=ffffdce05858 items=0 ppid=4160 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.409000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:56:21.409000 audit: BPF prog-id=178 op=UNLOAD Jan 20 23:56:21.409000 audit[4182]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdce05838 a3=0 items=0 ppid=4160 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.409000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:56:21.409000 audit: BPF prog-id=179 op=LOAD Jan 20 23:56:21.409000 audit[4182]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdce05718 a2=74 a3=95 items=0 ppid=4160 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.409000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:56:21.409000 audit: BPF prog-id=179 op=UNLOAD Jan 20 23:56:21.409000 audit[4182]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4160 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.409000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:56:21.409000 audit: BPF prog-id=180 op=LOAD Jan 20 23:56:21.409000 audit[4182]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdce05748 a2=40 a3=ffffdce05778 items=0 ppid=4160 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.409000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:56:21.409000 audit: BPF prog-id=180 op=UNLOAD Jan 20 23:56:21.409000 audit[4182]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdce05778 items=0 ppid=4160 pid=4182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.409000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:56:21.413000 audit: BPF prog-id=181 op=LOAD Jan 20 23:56:21.413000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff38545a8 a2=98 a3=fffff3854598 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.413000 audit: BPF prog-id=181 op=UNLOAD Jan 20 23:56:21.413000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff3854578 a3=0 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.413000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.414000 audit: BPF prog-id=182 op=LOAD Jan 20 23:56:21.414000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3854238 a2=74 a3=95 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.414000 audit: BPF prog-id=182 op=UNLOAD Jan 20 23:56:21.414000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.414000 audit: BPF prog-id=183 op=LOAD Jan 20 23:56:21.414000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3854298 a2=94 a3=2 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.414000 audit: BPF prog-id=183 op=UNLOAD Jan 20 23:56:21.414000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.414000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.739000 audit: BPF prog-id=184 op=LOAD Jan 20 23:56:21.739000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffff3854258 a2=40 a3=fffff3854288 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.739000 audit: BPF prog-id=184 op=UNLOAD Jan 20 23:56:21.739000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffff3854288 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.739000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.749000 audit: BPF prog-id=185 op=LOAD Jan 20 23:56:21.749000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3854268 a2=94 a3=4 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.749000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.749000 audit: BPF prog-id=185 op=UNLOAD Jan 20 23:56:21.749000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.749000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.750000 audit: BPF prog-id=186 op=LOAD Jan 20 23:56:21.750000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffff38540a8 a2=94 a3=5 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.750000 audit: BPF prog-id=186 op=UNLOAD Jan 20 23:56:21.750000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.750000 audit: BPF prog-id=187 op=LOAD Jan 20 23:56:21.750000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff38542d8 a2=94 a3=6 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.750000 audit: BPF prog-id=187 op=UNLOAD Jan 20 23:56:21.750000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.750000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 
23:56:21.751000 audit: BPF prog-id=188 op=LOAD Jan 20 23:56:21.751000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffff3853aa8 a2=94 a3=83 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.751000 audit: BPF prog-id=189 op=LOAD Jan 20 23:56:21.751000 audit[4183]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffff3853868 a2=94 a3=2 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.751000 audit: BPF prog-id=189 op=UNLOAD Jan 20 23:56:21.751000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.751000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.752000 audit: BPF prog-id=188 op=UNLOAD Jan 20 23:56:21.752000 audit[4183]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=185fd620 a3=185f0b00 items=0 ppid=4160 pid=4183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.752000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:56:21.766000 audit: BPF prog-id=190 op=LOAD Jan 20 23:56:21.766000 audit[4216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0ec6178 a2=98 a3=fffff0ec6168 items=0 ppid=4160 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.766000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:56:21.766000 audit: BPF prog-id=190 op=UNLOAD Jan 20 23:56:21.766000 audit[4216]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff0ec6148 a3=0 items=0 ppid=4160 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.766000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:56:21.766000 audit: BPF prog-id=191 op=LOAD Jan 20 23:56:21.766000 audit[4216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0ec6028 a2=74 a3=95 items=0 ppid=4160 pid=4216 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.766000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:56:21.766000 audit: BPF prog-id=191 op=UNLOAD Jan 20 23:56:21.766000 audit[4216]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4160 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.766000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:56:21.766000 audit: BPF prog-id=192 op=LOAD Jan 20 23:56:21.766000 audit[4216]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff0ec6058 a2=40 a3=fffff0ec6088 items=0 ppid=4160 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.766000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:56:21.766000 audit: BPF prog-id=192 op=UNLOAD Jan 20 23:56:21.766000 audit[4216]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff0ec6088 items=0 ppid=4160 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.766000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:56:21.849099 systemd-networkd[1481]: vxlan.calico: Link UP Jan 20 23:56:21.849116 systemd-networkd[1481]: vxlan.calico: Gained carrier Jan 20 23:56:21.885000 audit: BPF prog-id=193 op=LOAD Jan 20 23:56:21.885000 audit[4240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea930c88 a2=98 a3=ffffea930c78 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.885000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.885000 audit: BPF prog-id=193 op=UNLOAD Jan 20 23:56:21.885000 audit[4240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffea930c58 a3=0 items=0 ppid=4160 pid=4240 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.885000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=194 op=LOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea930968 a2=74 a3=95 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=194 op=UNLOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=195 op=LOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffea9309c8 a2=94 a3=2 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=195 op=UNLOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=196 op=LOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea930848 a2=40 a3=ffffea930878 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=196 op=UNLOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffea930878 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.886000 audit: BPF prog-id=197 op=LOAD Jan 20 23:56:21.886000 audit[4240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea930998 a2=94 a3=b7 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.886000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.887000 audit: BPF prog-id=197 op=UNLOAD Jan 20 23:56:21.887000 audit[4240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.887000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.888000 audit: BPF prog-id=198 op=LOAD Jan 20 23:56:21.888000 audit[4240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffea930048 a2=94 a3=2 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.888000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.888000 audit: BPF prog-id=198 op=UNLOAD Jan 20 23:56:21.888000 audit[4240]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.888000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.888000 audit: BPF prog-id=199 op=LOAD Jan 20 23:56:21.888000 audit[4240]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=6 a0=5 a1=ffffea9301d8 a2=94 a3=30 items=0 ppid=4160 pid=4240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.888000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:56:21.893000 audit: BPF prog-id=200 op=LOAD Jan 20 23:56:21.893000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffc6f6878 a2=98 a3=fffffc6f6868 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:21.893000 audit: BPF prog-id=200 op=UNLOAD Jan 20 23:56:21.893000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffc6f6848 a3=0 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:21.893000 audit: BPF prog-id=201 op=LOAD Jan 20 23:56:21.893000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc6f6508 a2=74 a3=95 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:21.893000 audit: BPF prog-id=201 op=UNLOAD Jan 20 23:56:21.893000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:21.893000 audit: BPF prog-id=202 op=LOAD Jan 20 23:56:21.893000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc6f6568 a2=94 a3=2 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.893000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:21.893000 audit: BPF prog-id=202 op=UNLOAD Jan 20 23:56:21.893000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:21.893000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.012000 audit: BPF prog-id=203 op=LOAD Jan 20 23:56:22.012000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=fffffc6f6528 a2=40 a3=fffffc6f6558 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.012000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.012000 audit: BPF prog-id=203 op=UNLOAD Jan 20 23:56:22.012000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=fffffc6f6558 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.012000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.024000 audit: BPF prog-id=204 op=LOAD Jan 20 23:56:22.024000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffc6f6538 a2=94 a3=4 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.024000 audit: BPF prog-id=204 op=UNLOAD Jan 20 23:56:22.024000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.024000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.025000 audit: BPF prog-id=205 op=LOAD Jan 20 23:56:22.025000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=fffffc6f6378 a2=94 a3=5 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.025000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.025000 audit: BPF prog-id=205 op=UNLOAD Jan 20 23:56:22.025000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.025000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.025000 audit: BPF prog-id=206 op=LOAD Jan 20 23:56:22.025000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffc6f65a8 a2=94 a3=6 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.025000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.026000 audit: BPF prog-id=206 op=UNLOAD Jan 20 23:56:22.026000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.026000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.026000 audit: BPF prog-id=207 op=LOAD Jan 20 23:56:22.026000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=fffffc6f5d78 a2=94 a3=83 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.026000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.026000 audit: BPF prog-id=208 op=LOAD Jan 20 23:56:22.026000 audit[4244]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=fffffc6f5b38 a2=94 a3=2 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.026000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.027000 audit: BPF prog-id=208 op=UNLOAD Jan 20 23:56:22.027000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 
items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.027000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.027000 audit: BPF prog-id=207 op=UNLOAD Jan 20 23:56:22.027000 audit[4244]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=5c70620 a3=5c63b00 items=0 ppid=4160 pid=4244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.027000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:56:22.033000 audit: BPF prog-id=199 op=UNLOAD Jan 20 23:56:22.033000 audit[4160]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=400095b480 a2=0 a3=0 items=0 ppid=3943 pid=4160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.033000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 23:56:22.093000 audit[4272]: NETFILTER_CFG table=nat:125 family=2 entries=15 op=nft_register_chain pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:22.093000 audit[4272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffce23eb40 a2=0 a3=ffff884a8fa8 items=0 ppid=4160 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.093000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:22.096000 audit[4275]: NETFILTER_CFG table=mangle:126 family=2 entries=16 op=nft_register_chain pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:22.096000 audit[4275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffffe835270 a2=0 a3=ffff8887bfa8 items=0 ppid=4160 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.096000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:22.102000 audit[4271]: NETFILTER_CFG table=raw:127 family=2 entries=21 op=nft_register_chain pid=4271 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:22.102000 audit[4271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcd6a9bb0 a2=0 a3=ffff8fcfcfa8 items=0 ppid=4160 pid=4271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
23:56:22.102000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:22.113000 audit[4277]: NETFILTER_CFG table=filter:128 family=2 entries=94 op=nft_register_chain pid=4277 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:22.113000 audit[4277]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffd5a01690 a2=0 a3=ffffa32f8fa8 items=0 ppid=4160 pid=4277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:22.113000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:23.031978 systemd-networkd[1481]: vxlan.calico: Gained IPv6LL Jan 20 23:56:23.455600 containerd[1592]: time="2026-01-20T23:56:23.455284620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wkbbm,Uid:0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:23.601731 systemd-networkd[1481]: cali060abdce8a2: Link UP Jan 20 23:56:23.602997 systemd-networkd[1481]: cali060abdce8a2: Gained carrier Jan 20 23:56:23.625630 containerd[1592]: 2026-01-20 23:56:23.517 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0 goldmane-666569f655- calico-system 0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0 798 0 2026-01-20 23:56:00 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 goldmane-666569f655-wkbbm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali060abdce8a2 [] [] }} ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-" Jan 20 23:56:23.625630 containerd[1592]: 2026-01-20 23:56:23.517 [INFO][4286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.625630 containerd[1592]: 2026-01-20 23:56:23.548 [INFO][4298] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" HandleID="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Workload="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.548 [INFO][4298] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" HandleID="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Workload="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3870), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"goldmane-666569f655-wkbbm", "timestamp":"2026-01-20 23:56:23.548813777 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.549 [INFO][4298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.549 [INFO][4298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.549 [INFO][4298] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.561 [INFO][4298] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.568 [INFO][4298] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.574 [INFO][4298] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.576 [INFO][4298] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.625870 containerd[1592]: 2026-01-20 23:56:23.579 [INFO][4298] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.579 [INFO][4298] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.581 [INFO][4298] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36 Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.587 [INFO][4298] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.594 [INFO][4298] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.194/26] block=192.168.112.192/26 handle="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.595 [INFO][4298] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.194/26] handle="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.595 [INFO][4298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
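[editor's note] The audit PROCTITLE fields above (for bpftool, iptables-nft-restore and, later, runc) carry the process command line as a hex-encoded string with NUL bytes separating the arguments. A minimal decoding sketch in Python; the helper name is illustrative and not part of any audit tooling, and the sample value is copied verbatim from the bpftool record earlier in this log:

```python
# Minimal sketch: decode the hex-encoded, NUL-separated argv carried in
# audit PROCTITLE records such as the bpftool entry above.
def decode_proctitle(hex_value: str) -> list[str]:
    """Return the argv encoded in an audit PROCTITLE field."""
    raw = bytes.fromhex(hex_value)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00")]

if __name__ == "__main__":
    # Value copied from the bpftool PROCTITLE record earlier in this log.
    sample = (
        "627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564"
        "002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41"
    )
    print(decode_proctitle(sample))
    # ['bpftool', '--json', '--pretty', 'prog', 'show', 'pinned',
    #  '/sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A']
```

The same routine decodes the iptables-nft-restore records to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000", matching the xtables-nft-multi exe fields logged alongside them.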
Jan 20 23:56:23.626176 containerd[1592]: 2026-01-20 23:56:23.595 [INFO][4298] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.194/26] IPv6=[] ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" HandleID="k8s-pod-network.db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Workload="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.626382 containerd[1592]: 2026-01-20 23:56:23.598 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"goldmane-666569f655-wkbbm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali060abdce8a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:23.626438 containerd[1592]: 2026-01-20 23:56:23.598 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.194/32] ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.626438 containerd[1592]: 2026-01-20 23:56:23.598 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali060abdce8a2 ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.626438 containerd[1592]: 2026-01-20 23:56:23.602 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.626492 containerd[1592]: 2026-01-20 23:56:23.603 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" 
Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36", Pod:"goldmane-666569f655-wkbbm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali060abdce8a2", MAC:"ea:d1:79:ad:57:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:23.626545 containerd[1592]: 2026-01-20 23:56:23.616 [INFO][4286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" Namespace="calico-system" Pod="goldmane-666569f655-wkbbm" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-goldmane--666569f655--wkbbm-eth0" Jan 20 23:56:23.652000 audit[4314]: NETFILTER_CFG table=filter:129 family=2 entries=44 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:23.655193 kernel: kauditd_printk_skb: 237 callbacks suppressed Jan 20 23:56:23.655311 kernel: audit: type=1325 audit(1768953383.652:650): table=filter:129 family=2 entries=44 op=nft_register_chain pid=4314 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:23.652000 audit[4314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffda7677b0 a2=0 a3=ffff8473cfa8 items=0 ppid=4160 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.658127 kernel: audit: type=1300 audit(1768953383.652:650): arch=c00000b7 syscall=211 success=yes exit=25180 a0=3 a1=ffffda7677b0 a2=0 a3=ffff8473cfa8 items=0 ppid=4160 pid=4314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.652000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:23.658490 containerd[1592]: time="2026-01-20T23:56:23.658455829Z" level=info msg="connecting to shim 
db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36" address="unix:///run/containerd/s/5323eed88e885c0f675766e18ee7128413840da28f683587755f9db355cb9e0b" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:23.659678 kernel: audit: type=1327 audit(1768953383.652:650): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:23.691288 systemd[1]: Started cri-containerd-db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36.scope - libcontainer container db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36. Jan 20 23:56:23.703000 audit: BPF prog-id=209 op=LOAD Jan 20 23:56:23.704000 audit: BPF prog-id=210 op=LOAD Jan 20 23:56:23.706600 kernel: audit: type=1334 audit(1768953383.703:651): prog-id=209 op=LOAD Jan 20 23:56:23.706673 kernel: audit: type=1334 audit(1768953383.704:652): prog-id=210 op=LOAD Jan 20 23:56:23.704000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.709769 kernel: audit: type=1300 audit(1768953383.704:652): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.709839 kernel: audit: type=1327 audit(1768953383.704:652): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.704000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.705000 audit: BPF prog-id=210 op=UNLOAD Jan 20 23:56:23.712523 kernel: audit: type=1334 audit(1768953383.705:653): prog-id=210 op=UNLOAD Jan 20 23:56:23.712613 kernel: audit: type=1300 audit(1768953383.705:653): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.705000 audit[4334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.717105 kernel: audit: type=1327 audit(1768953383.705:653): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.705000 audit: BPF prog-id=211 op=LOAD Jan 20 23:56:23.705000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.705000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.706000 audit: BPF prog-id=212 op=LOAD Jan 20 23:56:23.706000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.706000 audit: BPF prog-id=212 op=UNLOAD Jan 20 23:56:23.706000 audit[4334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.706000 audit: BPF prog-id=211 op=UNLOAD Jan 20 23:56:23.706000 audit[4334]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.706000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.706000 audit: BPF prog-id=213 op=LOAD Jan 20 23:56:23.706000 audit[4334]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4323 pid=4334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:23.706000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462393030343138323165613366386466353266326264613465663833 Jan 20 23:56:23.745565 containerd[1592]: time="2026-01-20T23:56:23.745500404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wkbbm,Uid:0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"db90041821ea3f8df52f2bda4ef83cba4d06af7bcdf736c9ae60dc0497537e36\"" Jan 20 23:56:23.747571 containerd[1592]: time="2026-01-20T23:56:23.747460171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:56:24.081170 containerd[1592]: time="2026-01-20T23:56:24.081111164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:24.082629 containerd[1592]: time="2026-01-20T23:56:24.082510569Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:56:24.082629 containerd[1592]: time="2026-01-20T23:56:24.082566009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:24.082820 kubelet[2824]: E0120 23:56:24.082774 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:56:24.083226 kubelet[2824]: E0120 23:56:24.082827 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:56:24.083226 kubelet[2824]: E0120 23:56:24.083011 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qdtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:24.084314 kubelet[2824]: E0120 23:56:24.084211 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:56:24.455393 containerd[1592]: time="2026-01-20T23:56:24.455001232Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-5sc47,Uid:9b72cdbf-b6bd-45ae-98ac-50d5aed18456,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:24.606068 systemd-networkd[1481]: cali52706933fe1: Link UP Jan 20 23:56:24.606779 systemd-networkd[1481]: cali52706933fe1: Gained carrier Jan 20 23:56:24.627163 containerd[1592]: 2026-01-20 23:56:24.513 [INFO][4360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0 csi-node-driver- calico-system 9b72cdbf-b6bd-45ae-98ac-50d5aed18456 705 0 2026-01-20 23:56:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 csi-node-driver-5sc47 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali52706933fe1 [] [] }} ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-" Jan 20 23:56:24.627163 containerd[1592]: 2026-01-20 23:56:24.513 [INFO][4360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.627163 containerd[1592]: 2026-01-20 23:56:24.543 [INFO][4373] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" HandleID="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Workload="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.543 [INFO][4373] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" HandleID="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Workload="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"csi-node-driver-5sc47", "timestamp":"2026-01-20 23:56:24.543078792 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.543 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.543 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.543 [INFO][4373] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.556 [INFO][4373] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.563 [INFO][4373] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.569 [INFO][4373] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.572 [INFO][4373] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627632 containerd[1592]: 2026-01-20 23:56:24.575 [INFO][4373] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.575 [INFO][4373] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.577 [INFO][4373] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.584 [INFO][4373] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.594 [INFO][4373] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.195/26] block=192.168.112.192/26 handle="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.594 [INFO][4373] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.195/26] handle="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.594 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
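[editor's note] The IPAM records for the goldmane and csi-node-driver sandboxes both draw from the same host-affine block, 192.168.112.192/26, and receive consecutive /32 addresses (.194 and .195). A short sketch with Python's standard ipaddress module confirming the arithmetic behind those lines; nothing here is Calico code, only the address math:

```python
# Sketch of the address math behind the IPAM lines above: Calico holds the
# host-affine block 192.168.112.192/26 for this node and hands out single
# /32 addresses (.194 for goldmane, .195 for csi-node-driver) from it.
import ipaddress

block = ipaddress.ip_network("192.168.112.192/26")
assigned = [ipaddress.ip_address(a) for a in ("192.168.112.194", "192.168.112.195")]

print(block.num_addresses)                 # 64 addresses per /26 block
print(all(a in block for a in assigned))   # True: both pods fall inside the block
```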
Jan 20 23:56:24.627907 containerd[1592]: 2026-01-20 23:56:24.594 [INFO][4373] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.195/26] IPv6=[] ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" HandleID="k8s-pod-network.4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Workload="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.628123 containerd[1592]: 2026-01-20 23:56:24.598 [INFO][4360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b72cdbf-b6bd-45ae-98ac-50d5aed18456", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"csi-node-driver-5sc47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52706933fe1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:24.628185 containerd[1592]: 2026-01-20 23:56:24.599 [INFO][4360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.195/32] ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.628185 containerd[1592]: 2026-01-20 23:56:24.599 [INFO][4360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali52706933fe1 ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.628185 containerd[1592]: 2026-01-20 23:56:24.602 [INFO][4360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.628283 containerd[1592]: 2026-01-20 23:56:24.605 [INFO][4360] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9b72cdbf-b6bd-45ae-98ac-50d5aed18456", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc", Pod:"csi-node-driver-5sc47", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali52706933fe1", MAC:"7a:eb:62:6d:47:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:24.628347 containerd[1592]: 2026-01-20 23:56:24.622 [INFO][4360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" Namespace="calico-system" Pod="csi-node-driver-5sc47" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-csi--node--driver--5sc47-eth0" Jan 20 23:56:24.641000 audit[4388]: NETFILTER_CFG table=filter:130 family=2 entries=46 op=nft_register_chain pid=4388 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:24.641000 audit[4388]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23616 a0=3 a1=ffffc815b000 a2=0 a3=ffff8bfd6fa8 items=0 ppid=4160 pid=4388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.641000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:24.656928 containerd[1592]: time="2026-01-20T23:56:24.656754393Z" level=info msg="connecting to shim 4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc" address="unix:///run/containerd/s/0806b0e53d3bfe11c052bac5212227d1bfbc1493c03dd91c1de8fc6acb423293" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:24.670628 kubelet[2824]: E0120 23:56:24.670468 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:56:24.689466 systemd[1]: Started cri-containerd-4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc.scope - libcontainer container 4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc. Jan 20 23:56:24.706000 audit: BPF prog-id=214 op=LOAD Jan 20 23:56:24.707000 audit: BPF prog-id=215 op=LOAD Jan 20 23:56:24.707000 audit[4409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.707000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.708000 audit: BPF prog-id=215 op=UNLOAD Jan 20 23:56:24.708000 audit[4409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.708000 audit: BPF prog-id=216 op=LOAD Jan 20 23:56:24.708000 audit[4409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.708000 audit: BPF prog-id=217 op=LOAD Jan 20 23:56:24.708000 audit[4409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.708000 audit: BPF prog-id=217 op=UNLOAD Jan 20 23:56:24.708000 audit[4409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.708000 audit: BPF prog-id=216 op=UNLOAD Jan 20 23:56:24.708000 audit[4409]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.708000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.709000 audit: BPF prog-id=218 op=LOAD Jan 20 23:56:24.709000 audit[4409]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4397 pid=4409 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.709000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464633931303439623365376263303136366432336438633462306134 Jan 20 23:56:24.726000 audit[4433]: NETFILTER_CFG table=filter:131 family=2 entries=20 op=nft_register_rule pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:24.726000 audit[4433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd8555140 a2=0 a3=1 items=0 ppid=2925 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:24.732000 audit[4433]: NETFILTER_CFG table=nat:132 family=2 entries=14 op=nft_register_rule pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:24.732000 audit[4433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd8555140 a2=0 a3=1 items=0 ppid=2925 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:24.732000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:24.735832 containerd[1592]: time="2026-01-20T23:56:24.735790844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5sc47,Uid:9b72cdbf-b6bd-45ae-98ac-50d5aed18456,Namespace:calico-system,Attempt:0,} returns sandbox id \"4dc91049b3e7bc0166d23d8c4b0a45f0f3018391af7782f42e7281ad575a57bc\"" Jan 20 23:56:24.737735 containerd[1592]: time="2026-01-20T23:56:24.737698290Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:56:25.065499 containerd[1592]: time="2026-01-20T23:56:25.065432639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:25.067738 containerd[1592]: time="2026-01-20T23:56:25.067657166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:56:25.067855 containerd[1592]: time="2026-01-20T23:56:25.067758566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:25.067967 kubelet[2824]: E0120 23:56:25.067919 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:56:25.068115 kubelet[2824]: E0120 23:56:25.067971 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:56:25.068588 kubelet[2824]: E0120 23:56:25.068136 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:25.071161 containerd[1592]: time="2026-01-20T23:56:25.070953535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:56:25.270252 systemd-networkd[1481]: cali060abdce8a2: Gained IPv6LL Jan 20 23:56:25.408004 containerd[1592]: time="2026-01-20T23:56:25.407841339Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:25.410793 containerd[1592]: time="2026-01-20T23:56:25.410143426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:56:25.410793 containerd[1592]: time="2026-01-20T23:56:25.410202146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:25.410963 kubelet[2824]: E0120 23:56:25.410418 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:56:25.410963 kubelet[2824]: E0120 23:56:25.410478 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:56:25.410963 kubelet[2824]: E0120 23:56:25.410649 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:25.412125 kubelet[2824]: E0120 23:56:25.412006 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:25.457723 containerd[1592]: time="2026-01-20T23:56:25.457079926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9nftp,Uid:81e80bc5-5484-4fcb-9332-a2cfbe9d8655,Namespace:kube-system,Attempt:0,}" Jan 20 23:56:25.608209 systemd-networkd[1481]: cali2b4cb438897: Link UP Jan 20 23:56:25.608912 systemd-networkd[1481]: cali2b4cb438897: Gained carrier Jan 20 23:56:25.626090 containerd[1592]: 2026-01-20 23:56:25.505 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0 coredns-668d6bf9bc- kube-system 81e80bc5-5484-4fcb-9332-a2cfbe9d8655 802 0 2026-01-20 23:55:43 
+0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 coredns-668d6bf9bc-9nftp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2b4cb438897 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-" Jan 20 23:56:25.626090 containerd[1592]: 2026-01-20 23:56:25.505 [INFO][4441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.626090 containerd[1592]: 2026-01-20 23:56:25.542 [INFO][4452] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" HandleID="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Workload="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.547 [INFO][4452] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" HandleID="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Workload="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c8fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"coredns-668d6bf9bc-9nftp", "timestamp":"2026-01-20 23:56:25.542781581 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.547 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.547 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.547 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.560 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.566 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.571 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.574 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.626948 containerd[1592]: 2026-01-20 23:56:25.577 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.578 [INFO][4452] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.580 [INFO][4452] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165 Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.587 [INFO][4452] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.599 [INFO][4452] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.196/26] block=192.168.112.192/26 handle="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.599 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.196/26] handle="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.599 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
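The IPAM trace above is deterministic once the block affinity is known: this node holds an affinity for 192.168.112.192/26, so pod addresses on it come from that block's 64 addresses (192.168.112.192-192.168.112.255), and 192.168.112.196 is the next free one. A short check with the standard library confirms the arithmetic; this is a sketch, not Calico's IPAM code.

// blockcheck.go - verify the Calico IPAM block arithmetic from the trace:
// the /26 affinity block and the address it handed out.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.112.192/26")
	assigned := netip.MustParseAddr("192.168.112.196")

	// A /26 leaves 32-26 = 6 host bits, i.e. 64 addresses per block.
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(32-block.Bits()))

	// The claimed address falls inside the node's affine block.
	fmt.Printf("%s in %s: %v\n", assigned, block, block.Contains(assigned))
}

The same block serves the later assignments of 192.168.112.197 and 192.168.112.198; the host-wide IPAM lock in these records is what serializes those claims.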
Jan 20 23:56:25.627485 containerd[1592]: 2026-01-20 23:56:25.599 [INFO][4452] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.196/26] IPv6=[] ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" HandleID="k8s-pod-network.0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Workload="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.628983 containerd[1592]: 2026-01-20 23:56:25.605 [INFO][4441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"81e80bc5-5484-4fcb-9332-a2cfbe9d8655", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"coredns-668d6bf9bc-9nftp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4cb438897", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:25.628983 containerd[1592]: 2026-01-20 23:56:25.605 [INFO][4441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.196/32] ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.628983 containerd[1592]: 2026-01-20 23:56:25.605 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2b4cb438897 ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.628983 containerd[1592]: 2026-01-20 23:56:25.609 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.628983 containerd[1592]: 2026-01-20 23:56:25.609 [INFO][4441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"81e80bc5-5484-4fcb-9332-a2cfbe9d8655", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165", Pod:"coredns-668d6bf9bc-9nftp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2b4cb438897", MAC:"de:57:5c:ba:e5:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:25.628983 containerd[1592]: 2026-01-20 23:56:25.623 [INFO][4441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" Namespace="kube-system" Pod="coredns-668d6bf9bc-9nftp" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--9nftp-eth0" Jan 20 23:56:25.656434 containerd[1592]: time="2026-01-20T23:56:25.656393759Z" level=info msg="connecting to shim 0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165" address="unix:///run/containerd/s/00ef05d0ca595bbdae5e0ede38a3c49d23088aef9f88845b9c44dc15ae2d115e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:25.662000 audit[4478]: NETFILTER_CFG table=filter:133 family=2 entries=46 op=nft_register_chain pid=4478 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:25.662000 audit[4478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23724 a0=3 a1=fffffeb3b780 a2=0 a3=ffff9d693fa8 items=0 ppid=4160 pid=4478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.662000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:25.681414 kubelet[2824]: E0120 23:56:25.681352 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:56:25.683831 kubelet[2824]: E0120 23:56:25.683685 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:25.706486 systemd[1]: Started cri-containerd-0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165.scope - libcontainer container 0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165. 
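The PROCTITLE field in the audit records is the process's argv, hex-encoded with NUL bytes separating the arguments, so the long hex string logged for pid 4478 above is just the iptables-nft-restore command line behind the NETFILTER_CFG event. A few lines of Go make such records readable (a sketch; real argv may contain non-UTF-8 bytes, which this ignores).

// proctitle.go - decode an audit PROCTITLE value: hex-encoded argv with
// NUL separators between the arguments.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Copied verbatim from the NETFILTER_CFG audit record above (pid 4478).
	const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}

The runc PROCTITLE values that follow decode the same way, into "runc --root /run/containerd/runc/k8s.io --log ..." invocations for the container IDs being started.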
Jan 20 23:56:25.736000 audit: BPF prog-id=219 op=LOAD Jan 20 23:56:25.737000 audit: BPF prog-id=220 op=LOAD Jan 20 23:56:25.737000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.737000 audit: BPF prog-id=220 op=UNLOAD Jan 20 23:56:25.737000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.737000 audit: BPF prog-id=221 op=LOAD Jan 20 23:56:25.737000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.737000 audit: BPF prog-id=222 op=LOAD Jan 20 23:56:25.737000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.737000 audit: BPF prog-id=222 op=UNLOAD Jan 20 23:56:25.737000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.737000 audit: BPF prog-id=221 op=UNLOAD Jan 20 23:56:25.737000 audit[4489]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.737000 audit: BPF prog-id=223 op=LOAD Jan 20 23:56:25.737000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4476 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.737000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3038353663626664346561653031306561653732653737633432343864 Jan 20 23:56:25.780986 containerd[1592]: time="2026-01-20T23:56:25.780853490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9nftp,Uid:81e80bc5-5484-4fcb-9332-a2cfbe9d8655,Namespace:kube-system,Attempt:0,} returns sandbox id \"0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165\"" Jan 20 23:56:25.785597 containerd[1592]: time="2026-01-20T23:56:25.785540904Z" level=info msg="CreateContainer within sandbox \"0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 23:56:25.801104 containerd[1592]: time="2026-01-20T23:56:25.800891230Z" level=info msg="Container 7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:25.813545 containerd[1592]: time="2026-01-20T23:56:25.813432427Z" level=info msg="CreateContainer within sandbox \"0856cbfd4eae010eae72e77c4248dd54fbc6d1b89f17aa9fce6004579155c165\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43\"" Jan 20 23:56:25.815479 containerd[1592]: time="2026-01-20T23:56:25.815403273Z" level=info msg="StartContainer for \"7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43\"" Jan 20 23:56:25.816904 containerd[1592]: time="2026-01-20T23:56:25.816874277Z" level=info msg="connecting to shim 7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43" address="unix:///run/containerd/s/00ef05d0ca595bbdae5e0ede38a3c49d23088aef9f88845b9c44dc15ae2d115e" protocol=ttrpc version=3 Jan 20 23:56:25.840091 systemd[1]: Started cri-containerd-7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43.scope - libcontainer container 7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43. 
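Each container start is bracketed by audit records for BPF program loads and unloads issued by runc (comm="runc"). In those records arch=c00000b7 is AUDIT_ARCH_AARCH64 and, assuming the asm-generic syscall numbering that arm64 uses, syscall 280 is bpf (the LOAD events), 57 is close (the fd release that produces the matching UNLOAD), and 211 is sendmsg (the netlink call behind the NETFILTER_CFG records). The lookup below states that numbering as an assumption rather than something read from this host.

// auditsyscalls.go - name the raw numbers in these audit records.
// Assumption: asm-generic syscall numbering as used by arm64
// (arch=c00000b7 == AUDIT_ARCH_AARCH64); not read from this machine.
package main

import "fmt"

func main() {
	const auditArchAarch64 = 0xC00000B7

	names := map[int]string{
		57:  "close",   // releasing a BPF program fd shows up as op=UNLOAD
		211: "sendmsg", // netlink transfer behind NETFILTER_CFG nft_register_*
		280: "bpf",     // BPF_PROG_LOAD issued by runc at container start
	}

	fmt.Printf("arch=%08x\n", auditArchAarch64)
	for _, nr := range []int{280, 57, 211} {
		fmt.Printf("syscall=%d -> %s\n", nr, names[nr])
	}
}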
Jan 20 23:56:25.859000 audit: BPF prog-id=224 op=LOAD Jan 20 23:56:25.860000 audit: BPF prog-id=225 op=LOAD Jan 20 23:56:25.860000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.860000 audit: BPF prog-id=225 op=UNLOAD Jan 20 23:56:25.860000 audit[4515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.860000 audit: BPF prog-id=226 op=LOAD Jan 20 23:56:25.860000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.860000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.861000 audit: BPF prog-id=227 op=LOAD Jan 20 23:56:25.861000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.861000 audit: BPF prog-id=227 op=UNLOAD Jan 20 23:56:25.861000 audit[4515]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.861000 audit: BPF prog-id=226 op=UNLOAD Jan 20 23:56:25.861000 audit[4515]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.861000 audit: BPF prog-id=228 op=LOAD Jan 20 23:56:25.861000 audit[4515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=4476 pid=4515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:25.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733353264316666366530363738333532373233393031346433363965 Jan 20 23:56:25.891248 containerd[1592]: time="2026-01-20T23:56:25.891104458Z" level=info msg="StartContainer for \"7352d1ff6e06783527239014d369e7f85adade0ecb06c4848a539a43a38b1a43\" returns successfully" Jan 20 23:56:26.455078 containerd[1592]: time="2026-01-20T23:56:26.454964414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fmt6r,Uid:867a1dc3-f4d9-4cba-a9b8-47adcf051929,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:56:26.590087 systemd-networkd[1481]: calif799a525463: Link UP Jan 20 23:56:26.590512 systemd-networkd[1481]: calif799a525463: Gained carrier Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.505 [INFO][4552] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0 calico-apiserver-799c555658- calico-apiserver 867a1dc3-f4d9-4cba-a9b8-47adcf051929 791 0 2026-01-20 23:55:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799c555658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 calico-apiserver-799c555658-fmt6r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif799a525463 [] [] }} ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.506 [INFO][4552] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.535 [INFO][4564] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" 
HandleID="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.536 [INFO][4564] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" HandleID="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"calico-apiserver-799c555658-fmt6r", "timestamp":"2026-01-20 23:56:26.53587884 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.536 [INFO][4564] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.536 [INFO][4564] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.536 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.550 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.556 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.561 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.563 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.566 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.566 [INFO][4564] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.568 [INFO][4564] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.572 [INFO][4564] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.582 [INFO][4564] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.197/26] block=192.168.112.192/26 handle="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 
containerd[1592]: 2026-01-20 23:56:26.582 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.197/26] handle="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.582 [INFO][4564] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 23:56:26.616843 containerd[1592]: 2026-01-20 23:56:26.582 [INFO][4564] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.197/26] IPv6=[] ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" HandleID="k8s-pod-network.787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.617657 containerd[1592]: 2026-01-20 23:56:26.586 [INFO][4552] cni-plugin/k8s.go 418: Populated endpoint ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0", GenerateName:"calico-apiserver-799c555658-", Namespace:"calico-apiserver", SelfLink:"", UID:"867a1dc3-f4d9-4cba-a9b8-47adcf051929", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799c555658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"calico-apiserver-799c555658-fmt6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif799a525463", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:26.617657 containerd[1592]: 2026-01-20 23:56:26.586 [INFO][4552] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.197/32] ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.617657 containerd[1592]: 2026-01-20 23:56:26.586 [INFO][4552] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif799a525463 ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.617657 containerd[1592]: 2026-01-20 
23:56:26.588 [INFO][4552] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.617657 containerd[1592]: 2026-01-20 23:56:26.591 [INFO][4552] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0", GenerateName:"calico-apiserver-799c555658-", Namespace:"calico-apiserver", SelfLink:"", UID:"867a1dc3-f4d9-4cba-a9b8-47adcf051929", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799c555658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb", Pod:"calico-apiserver-799c555658-fmt6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif799a525463", MAC:"b6:96:36:b2:ba:fe", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:26.617657 containerd[1592]: 2026-01-20 23:56:26.610 [INFO][4552] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fmt6r" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fmt6r-eth0" Jan 20 23:56:26.631000 audit[4578]: NETFILTER_CFG table=filter:134 family=2 entries=58 op=nft_register_chain pid=4578 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:26.631000 audit[4578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30568 a0=3 a1=ffffc5df0c30 a2=0 a3=ffff90e37fa8 items=0 ppid=4160 pid=4578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.631000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:26.641959 containerd[1592]: time="2026-01-20T23:56:26.641871616Z" level=info 
msg="connecting to shim 787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb" address="unix:///run/containerd/s/b16e3682e59a584826532e7cd5aba16131253b30f3ebed513c51f2e2107b2749" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:26.674645 systemd[1]: Started cri-containerd-787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb.scope - libcontainer container 787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb. Jan 20 23:56:26.677689 systemd-networkd[1481]: cali52706933fe1: Gained IPv6LL Jan 20 23:56:26.690616 kubelet[2824]: E0120 23:56:26.690510 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:26.724132 kubelet[2824]: I0120 23:56:26.722499 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9nftp" podStartSLOduration=43.722478921 podStartE2EDuration="43.722478921s" podCreationTimestamp="2026-01-20 23:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:56:26.720884556 +0000 UTC m=+51.390637971" watchObservedRunningTime="2026-01-20 23:56:26.722478921 +0000 UTC m=+51.392232296" Jan 20 23:56:26.752000 audit[4620]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:26.752000 audit[4620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffef74caf0 a2=0 a3=1 items=0 ppid=2925 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.752000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:26.773000 audit[4620]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4620 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:26.773000 audit[4620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffef74caf0 a2=0 a3=1 items=0 ppid=2925 pid=4620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.773000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:26.786000 audit: BPF prog-id=229 op=LOAD Jan 20 23:56:26.787000 audit: BPF prog-id=230 op=LOAD Jan 20 23:56:26.787000 audit[4599]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.787000 audit: BPF prog-id=230 op=UNLOAD Jan 20 23:56:26.787000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.787000 audit: BPF prog-id=231 op=LOAD Jan 20 23:56:26.787000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.787000 audit: BPF prog-id=232 op=LOAD Jan 20 23:56:26.787000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.787000 audit: BPF prog-id=232 op=UNLOAD Jan 20 23:56:26.787000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.787000 audit: BPF prog-id=231 op=UNLOAD Jan 20 23:56:26.787000 audit[4599]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.787000 audit: BPF prog-id=233 op=LOAD Jan 20 23:56:26.787000 audit[4599]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4587 pid=4599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3738376437663132633935383232636261666366646439346361313565 Jan 20 23:56:26.807000 audit[4623]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:26.807000 audit[4623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc625c810 a2=0 a3=1 items=0 ppid=2925 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:26.814000 audit[4623]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_register_chain pid=4623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:26.814000 audit[4623]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc625c810 a2=0 a3=1 items=0 ppid=2925 pid=4623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.814000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:26.825681 containerd[1592]: time="2026-01-20T23:56:26.825607649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fmt6r,Uid:867a1dc3-f4d9-4cba-a9b8-47adcf051929,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"787d7f12c95822cbafcfdd94ca15e86e86b30de4157aebed141f1edeeaa05fdb\"" Jan 20 23:56:26.829301 containerd[1592]: time="2026-01-20T23:56:26.829202579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:56:27.061684 systemd-networkd[1481]: cali2b4cb438897: Gained IPv6LL Jan 20 23:56:27.169091 containerd[1592]: time="2026-01-20T23:56:27.168933858Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:27.170956 containerd[1592]: time="2026-01-20T23:56:27.170806983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
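A few records back, pod_startup_latency_tracker reports podStartSLOduration=43.722478921 for coredns-668d6bf9bc-9nftp. Since no pull was recorded for that pod (firstStartedPulling/lastFinishedPulling are the zero time), the SLO duration is simply the gap between the pod's creation timestamp and the watchObservedRunningTime in the same record, and the arithmetic reproduces exactly. This is a sketch of the bookkeeping, not kubelet code.

// slodur.go - reproduce the podStartSLOduration reported by kubelet's
// pod_startup_latency_tracker for coredns-668d6bf9bc-9nftp.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2026-01-20 23:55:43 +0000 UTC")
	observed, _ := time.Parse(layout, "2026-01-20 23:56:26.722478921 +0000 UTC")

	fmt.Println(observed.Sub(created)) // 43.722478921s
}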
Jan 20 23:56:27.171137 containerd[1592]: time="2026-01-20T23:56:27.170916383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:27.171405 kubelet[2824]: E0120 23:56:27.171328 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:27.172264 kubelet[2824]: E0120 23:56:27.171426 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:27.172264 kubelet[2824]: E0120 23:56:27.171727 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgrbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:27.173201 kubelet[2824]: E0120 23:56:27.172960 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:56:27.456033 containerd[1592]: time="2026-01-20T23:56:27.455700209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fhgzx,Uid:3f49a160-e207-435e-86a4-138a2a624ffb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:56:27.456834 containerd[1592]: time="2026-01-20T23:56:27.456788972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqqpq,Uid:1dae76fe-b281-4e92-8149-0d08a51ef82b,Namespace:kube-system,Attempt:0,}" Jan 20 23:56:27.634032 systemd-networkd[1481]: calieac4de63287: Link UP Jan 20 23:56:27.636066 systemd-networkd[1481]: calieac4de63287: Gained carrier Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.521 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0 coredns-668d6bf9bc- kube-system 1dae76fe-b281-4e92-8149-0d08a51ef82b 800 0 2026-01-20 23:55:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 coredns-668d6bf9bc-hqqpq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieac4de63287 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.521 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.560 [INFO][4662] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" HandleID="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Workload="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.560 [INFO][4662] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" HandleID="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Workload="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"coredns-668d6bf9bc-hqqpq", "timestamp":"2026-01-20 23:56:27.560276283 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:27.654207 
containerd[1592]: 2026-01-20 23:56:27.560 [INFO][4662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.560 [INFO][4662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.560 [INFO][4662] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.581 [INFO][4662] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.589 [INFO][4662] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.596 [INFO][4662] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.598 [INFO][4662] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.602 [INFO][4662] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.602 [INFO][4662] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.604 [INFO][4662] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0 Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.610 [INFO][4662] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.620 [INFO][4662] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.198/26] block=192.168.112.192/26 handle="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.620 [INFO][4662] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.198/26] handle="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.620 [INFO][4662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
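The WorkloadEndpointPort values in these endpoint dumps are printed in hexadecimal, so the coredns named ports look unfamiliar at first glance: 0x35 is DNS port 53 (the dns and dns-tcp entries) and 0x23c1 is the coredns metrics port 9153. The conversion is trivial, but spelled out once here.

// ports.go - the WorkloadEndpointPort fields above are shown in hex;
// convert them back to the familiar port numbers.
package main

import "fmt"

func main() {
	fmt.Println("dns/dns-tcp:", 0x35)   // 53
	fmt.Println("metrics:    ", 0x23c1) // 9153
}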
Jan 20 23:56:27.654207 containerd[1592]: 2026-01-20 23:56:27.620 [INFO][4662] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.198/26] IPv6=[] ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" HandleID="k8s-pod-network.f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Workload="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.655175 containerd[1592]: 2026-01-20 23:56:27.626 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1dae76fe-b281-4e92-8149-0d08a51ef82b", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"coredns-668d6bf9bc-hqqpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieac4de63287", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:27.655175 containerd[1592]: 2026-01-20 23:56:27.626 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.198/32] ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.655175 containerd[1592]: 2026-01-20 23:56:27.626 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieac4de63287 ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.655175 containerd[1592]: 2026-01-20 23:56:27.636 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.655175 containerd[1592]: 2026-01-20 23:56:27.637 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1dae76fe-b281-4e92-8149-0d08a51ef82b", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0", Pod:"coredns-668d6bf9bc-hqqpq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieac4de63287", MAC:"a2:f6:3c:34:f8:53", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:27.655175 containerd[1592]: 2026-01-20 23:56:27.651 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" Namespace="kube-system" Pod="coredns-668d6bf9bc-hqqpq" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-coredns--668d6bf9bc--hqqpq-eth0" Jan 20 23:56:27.689000 audit[4693]: NETFILTER_CFG table=filter:139 family=2 entries=44 op=nft_register_chain pid=4693 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:27.689000 audit[4693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21516 a0=3 a1=ffffd42c6f50 a2=0 a3=ffff8f604fa8 items=0 ppid=4160 pid=4693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.689000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:27.696453 containerd[1592]: time="2026-01-20T23:56:27.696353839Z" 
level=info msg="connecting to shim f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0" address="unix:///run/containerd/s/008a793bac1e8267c9a5a991fe1df528259debf3d648ed370fc607c0aabc6557" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:27.702596 kubelet[2824]: E0120 23:56:27.702405 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:56:27.757575 systemd[1]: Started cri-containerd-f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0.scope - libcontainer container f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0. Jan 20 23:56:27.760189 systemd-networkd[1481]: cali2d7d88f196a: Link UP Jan 20 23:56:27.761130 systemd-networkd[1481]: cali2d7d88f196a: Gained carrier Jan 20 23:56:27.766178 systemd-networkd[1481]: calif799a525463: Gained IPv6LL Jan 20 23:56:27.791000 audit: BPF prog-id=234 op=LOAD Jan 20 23:56:27.792000 audit: BPF prog-id=235 op=LOAD Jan 20 23:56:27.792000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.792000 audit: BPF prog-id=235 op=UNLOAD Jan 20 23:56:27.792000 audit[4707]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.792000 audit: BPF prog-id=236 op=LOAD Jan 20 23:56:27.792000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.792000 audit: BPF prog-id=237 op=LOAD Jan 20 23:56:27.792000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 
items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.792000 audit: BPF prog-id=237 op=UNLOAD Jan 20 23:56:27.792000 audit[4707]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.792000 audit: BPF prog-id=236 op=UNLOAD Jan 20 23:56:27.792000 audit[4707]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.793000 audit: BPF prog-id=238 op=LOAD Jan 20 23:56:27.793000 audit[4707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4695 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637336233373230393438326665323061323662366163613034353261 Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.522 [INFO][4636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0 calico-apiserver-799c555658- calico-apiserver 3f49a160-e207-435e-86a4-138a2a624ffb 801 0 2026-01-20 23:55:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:799c555658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 calico-apiserver-799c555658-fhgzx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d7d88f196a [] [] }} ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" 
WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.523 [INFO][4636] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.572 [INFO][4665] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" HandleID="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.572 [INFO][4665] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" HandleID="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b0b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"calico-apiserver-799c555658-fhgzx", "timestamp":"2026-01-20 23:56:27.572015953 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.572 [INFO][4665] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.620 [INFO][4665] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
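In the WorkloadEndpoint dump for the coredns pod above, the struct formatter prints port numbers in hex (Port:0x35, Port:0x23c1). Converting them back gives 53 for DNS and 9153 for the metrics endpoint, the usual CoreDNS ports; a one-liner with the values as logged:

    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}   # hex values from the endpoint dump
    print(ports)    # {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}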
Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.620 [INFO][4665] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.682 [INFO][4665] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.691 [INFO][4665] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.701 [INFO][4665] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.708 [INFO][4665] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.713 [INFO][4665] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.713 [INFO][4665] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.724 [INFO][4665] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.734 [INFO][4665] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.747 [INFO][4665] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.199/26] block=192.168.112.192/26 handle="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.747 [INFO][4665] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.199/26] handle="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.747 [INFO][4665] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
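Two CNI ADDs run side by side here: [4662] for the coredns pod and [4665] for calico-apiserver-799c555658-fhgzx. The host-wide IPAM lock serializes them; [4665] logs "About to acquire host-wide IPAM lock" at 23:56:27.572 but only acquires it at 23:56:27.620, the same instant [4662] releases it. A quick check of the wait, using only timestamps from the log:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    requested = datetime.strptime("23:56:27.572", fmt)   # [4665] about to acquire the lock
    acquired = datetime.strptime("23:56:27.620", fmt)    # [4665] acquires it, right after [4662] releases
    print(acquired - requested)                          # 0:00:00.048000 spent waiting behind [4662]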
Jan 20 23:56:27.796918 containerd[1592]: 2026-01-20 23:56:27.747 [INFO][4665] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.199/26] IPv6=[] ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" HandleID="k8s-pod-network.a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.799298 containerd[1592]: 2026-01-20 23:56:27.751 [INFO][4636] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0", GenerateName:"calico-apiserver-799c555658-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f49a160-e207-435e-86a4-138a2a624ffb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799c555658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"calico-apiserver-799c555658-fhgzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d7d88f196a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:27.799298 containerd[1592]: 2026-01-20 23:56:27.751 [INFO][4636] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.199/32] ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.799298 containerd[1592]: 2026-01-20 23:56:27.751 [INFO][4636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d7d88f196a ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.799298 containerd[1592]: 2026-01-20 23:56:27.762 [INFO][4636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.799298 containerd[1592]: 2026-01-20 
23:56:27.764 [INFO][4636] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0", GenerateName:"calico-apiserver-799c555658-", Namespace:"calico-apiserver", SelfLink:"", UID:"3f49a160-e207-435e-86a4-138a2a624ffb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 55, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"799c555658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc", Pod:"calico-apiserver-799c555658-fhgzx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d7d88f196a", MAC:"8a:2d:ed:4d:5c:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:27.799298 containerd[1592]: 2026-01-20 23:56:27.787 [INFO][4636] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" Namespace="calico-apiserver" Pod="calico-apiserver-799c555658-fhgzx" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--apiserver--799c555658--fhgzx-eth0" Jan 20 23:56:27.824000 audit[4736]: NETFILTER_CFG table=filter:140 family=2 entries=59 op=nft_register_chain pid=4736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:27.824000 audit[4736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29476 a0=3 a1=ffffd367ab80 a2=0 a3=ffffa79c3fa8 items=0 ppid=4160 pid=4736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.824000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:27.840153 containerd[1592]: time="2026-01-20T23:56:27.840020535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hqqpq,Uid:1dae76fe-b281-4e92-8149-0d08a51ef82b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0\"" Jan 20 23:56:27.846410 containerd[1592]: time="2026-01-20T23:56:27.846367152Z" level=info msg="connecting to shim 
a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc" address="unix:///run/containerd/s/dedc28a7b6965d99d961559428399991efb4d08b4f283dd053da06606bff5c38" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:27.847056 containerd[1592]: time="2026-01-20T23:56:27.846416232Z" level=info msg="CreateContainer within sandbox \"f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 23:56:27.861000 audit[4759]: NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:27.861000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb1e92d0 a2=0 a3=1 items=0 ppid=2925 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.861000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:27.872000 audit[4759]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=4759 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:27.872000 audit[4759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcb1e92d0 a2=0 a3=1 items=0 ppid=2925 pid=4759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:27.886664 containerd[1592]: time="2026-01-20T23:56:27.885932415Z" level=info msg="Container 35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:27.900144 containerd[1592]: time="2026-01-20T23:56:27.900099012Z" level=info msg="CreateContainer within sandbox \"f73b37209482fe20a26b6aca0452af05225a6e82a2e76d5ce893ab11c53b37f0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58\"" Jan 20 23:56:27.901273 containerd[1592]: time="2026-01-20T23:56:27.901223335Z" level=info msg="StartContainer for \"35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58\"" Jan 20 23:56:27.902929 containerd[1592]: time="2026-01-20T23:56:27.902883460Z" level=info msg="connecting to shim 35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58" address="unix:///run/containerd/s/008a793bac1e8267c9a5a991fe1df528259debf3d648ed370fc607c0aabc6557" protocol=ttrpc version=3 Jan 20 23:56:27.905470 systemd[1]: Started cri-containerd-a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc.scope - libcontainer container a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc. Jan 20 23:56:27.933321 systemd[1]: Started cri-containerd-35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58.scope - libcontainer container 35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58. 
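The audit PROCTITLE records in this section carry the audited command line as hex-encoded, NUL-separated argv. Decoding is mechanical; a short sketch using the proctitle value from the NETFILTER_CFG record at 23:56:27.689 (the runc entries decode the same way):

    # Hex proctitle copied verbatim from the audit record above.
    hexdata = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
    argv = bytes.fromhex(hexdata).split(b"\x00")
    print([a.decode() for a in argv])
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']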
Jan 20 23:56:27.938000 audit: BPF prog-id=239 op=LOAD Jan 20 23:56:27.938000 audit: BPF prog-id=240 op=LOAD Jan 20 23:56:27.938000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.939000 audit: BPF prog-id=240 op=UNLOAD Jan 20 23:56:27.939000 audit[4763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.939000 audit: BPF prog-id=241 op=LOAD Jan 20 23:56:27.939000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.939000 audit: BPF prog-id=242 op=LOAD Jan 20 23:56:27.939000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.939000 audit: BPF prog-id=242 op=UNLOAD Jan 20 23:56:27.939000 audit[4763]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.939000 audit: BPF prog-id=241 op=UNLOAD Jan 20 23:56:27.939000 audit[4763]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.939000 audit: BPF prog-id=243 op=LOAD Jan 20 23:56:27.939000 audit[4763]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4751 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.939000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136613838633235373536613331613831353966663835303233346266 Jan 20 23:56:27.953000 audit: BPF prog-id=244 op=LOAD Jan 20 23:56:27.954000 audit: BPF prog-id=245 op=LOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.954000 audit: BPF prog-id=245 op=UNLOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.954000 audit: BPF prog-id=246 op=LOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.954000 audit: BPF prog-id=247 op=LOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 
ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.954000 audit: BPF prog-id=247 op=UNLOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.954000 audit: BPF prog-id=246 op=UNLOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.954000 audit: BPF prog-id=248 op=LOAD Jan 20 23:56:27.954000 audit[4775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4695 pid=4775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:27.954000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3335653737303334613633616635613237333963656263366664303033 Jan 20 23:56:27.995198 containerd[1592]: time="2026-01-20T23:56:27.995155981Z" level=info msg="StartContainer for \"35e77034a63af5a2739cebc6fd003ae55cd14957e0a83f4746f4f508d7e2be58\" returns successfully" Jan 20 23:56:28.013449 containerd[1592]: time="2026-01-20T23:56:28.013243947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-799c555658-fhgzx,Uid:3f49a160-e207-435e-86a4-138a2a624ffb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a6a88c25756a31a8159ff850234bf57b2f2ee19eba00a43136cde789ac48d8cc\"" Jan 20 23:56:28.021849 containerd[1592]: time="2026-01-20T23:56:28.021774927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:56:28.350134 containerd[1592]: time="2026-01-20T23:56:28.349807133Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:28.351835 containerd[1592]: time="2026-01-20T23:56:28.351701897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:56:28.351835 containerd[1592]: time="2026-01-20T23:56:28.351759377Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:28.352201 kubelet[2824]: E0120 23:56:28.352154 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:28.352343 kubelet[2824]: E0120 23:56:28.352211 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:28.352343 kubelet[2824]: E0120 23:56:28.352350 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfhtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Jan 20 23:56:28.354202 kubelet[2824]: E0120 23:56:28.354155 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:56:28.455577 containerd[1592]: time="2026-01-20T23:56:28.455460112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5446b598c6-knjcl,Uid:dddef414-cca7-4fb7-84c5-239896cb0ee3,Namespace:calico-system,Attempt:0,}" Jan 20 23:56:28.597300 systemd-networkd[1481]: cali47b857e7a9f: Link UP Jan 20 23:56:28.598316 systemd-networkd[1481]: cali47b857e7a9f: Gained carrier Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.498 [INFO][4822] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0 calico-kube-controllers-5446b598c6- calico-system dddef414-cca7-4fb7-84c5-239896cb0ee3 799 0 2026-01-20 23:56:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5446b598c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-n-f640cc67e1 calico-kube-controllers-5446b598c6-knjcl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali47b857e7a9f [] [] }} ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.498 [INFO][4822] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.529 [INFO][4834] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" HandleID="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.529 [INFO][4834] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" HandleID="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-f640cc67e1", "pod":"calico-kube-controllers-5446b598c6-knjcl", "timestamp":"2026-01-20 23:56:28.529676934 +0000 UTC"}, Hostname:"ci-4547-0-0-n-f640cc67e1", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.529 [INFO][4834] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.530 [INFO][4834] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.530 [INFO][4834] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-f640cc67e1' Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.547 [INFO][4834] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.557 [INFO][4834] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.566 [INFO][4834] ipam/ipam.go 511: Trying affinity for 192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.570 [INFO][4834] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.574 [INFO][4834] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.192/26 host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.574 [INFO][4834] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.192/26 handle="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.576 [INFO][4834] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77 Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.582 [INFO][4834] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.192/26 handle="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.590 [INFO][4834] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.200/26] block=192.168.112.192/26 handle="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.590 [INFO][4834] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.200/26] handle="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" host="ci-4547-0-0-n-f640cc67e1" Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.591 [INFO][4834] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
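The pull failures logged just above at 23:56:28.35 (PullImage of ghcr.io/flatcar/calico/apiserver:v3.30.4 failing with a 404 from ghcr.io, then the kubelet's ErrImagePull and ImagePullBackOff entries) mean containerd could not resolve that tag's manifest. A rough way to repeat the resolution outside the kubelet is sketched below; the ghcr.io/token endpoint, anonymous pull access, and the Accept header are assumptions based on the standard OCI distribution API, not anything shown in the log:

    import json, urllib.error, urllib.request

    repo, tag = "flatcar/calico/apiserver", "v3.30.4"   # reference taken from the kubelet errors
    token = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?service=ghcr.io&scope=repository:{repo}:pull"))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/vnd.oci.image.index.v1+json"})
    try:
        urllib.request.urlopen(req)
        print("manifest resolved")
    except urllib.error.HTTPError as err:
        print("registry returned", err.code)   # a 404 here matches the "fetch failed after status: 404" line

The switch from ErrImagePull to ImagePullBackOff in the later pod_workers entries is the kubelet backing off retries of the same failed pull.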
Jan 20 23:56:28.625636 containerd[1592]: 2026-01-20 23:56:28.591 [INFO][4834] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.200/26] IPv6=[] ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" HandleID="k8s-pod-network.41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Workload="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.629429 containerd[1592]: 2026-01-20 23:56:28.593 [INFO][4822] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0", GenerateName:"calico-kube-controllers-5446b598c6-", Namespace:"calico-system", SelfLink:"", UID:"dddef414-cca7-4fb7-84c5-239896cb0ee3", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5446b598c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"", Pod:"calico-kube-controllers-5446b598c6-knjcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47b857e7a9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:28.629429 containerd[1592]: 2026-01-20 23:56:28.593 [INFO][4822] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.200/32] ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.629429 containerd[1592]: 2026-01-20 23:56:28.593 [INFO][4822] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali47b857e7a9f ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.629429 containerd[1592]: 2026-01-20 23:56:28.598 [INFO][4822] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" 
WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.629429 containerd[1592]: 2026-01-20 23:56:28.599 [INFO][4822] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0", GenerateName:"calico-kube-controllers-5446b598c6-", Namespace:"calico-system", SelfLink:"", UID:"dddef414-cca7-4fb7-84c5-239896cb0ee3", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 56, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5446b598c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-f640cc67e1", ContainerID:"41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77", Pod:"calico-kube-controllers-5446b598c6-knjcl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali47b857e7a9f", MAC:"ca:8d:85:71:81:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:56:28.629429 containerd[1592]: 2026-01-20 23:56:28.617 [INFO][4822] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" Namespace="calico-system" Pod="calico-kube-controllers-5446b598c6-knjcl" WorkloadEndpoint="ci--4547--0--0--n--f640cc67e1-k8s-calico--kube--controllers--5446b598c6--knjcl-eth0" Jan 20 23:56:28.642000 audit[4847]: NETFILTER_CFG table=filter:143 family=2 entries=52 op=nft_register_chain pid=4847 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:56:28.642000 audit[4847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24296 a0=3 a1=ffffe2dcd090 a2=0 a3=ffff9724cfa8 items=0 ppid=4160 pid=4847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.642000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:56:28.653653 containerd[1592]: time="2026-01-20T23:56:28.653147397Z" level=info msg="connecting to shim 41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77" 
address="unix:///run/containerd/s/7fc775eeec403019773c8993c6c5af36440351e2e021fecdf0b450bf2ce39c19" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:56:28.691447 systemd[1]: Started cri-containerd-41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77.scope - libcontainer container 41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77. Jan 20 23:56:28.706703 kubelet[2824]: E0120 23:56:28.706666 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:56:28.715312 kubelet[2824]: E0120 23:56:28.715025 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:56:28.735000 audit: BPF prog-id=249 op=LOAD Jan 20 23:56:28.737711 kernel: kauditd_printk_skb: 211 callbacks suppressed Jan 20 23:56:28.737788 kernel: audit: type=1334 audit(1768953388.735:729): prog-id=249 op=LOAD Jan 20 23:56:28.738000 audit: BPF prog-id=250 op=LOAD Jan 20 23:56:28.745018 kernel: audit: type=1334 audit(1768953388.738:730): prog-id=250 op=LOAD Jan 20 23:56:28.745086 kernel: audit: type=1300 audit(1768953388.738:730): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.738000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.749802 kernel: audit: type=1327 audit(1768953388.738:730): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.738000 audit: BPF prog-id=250 op=UNLOAD Jan 20 23:56:28.751189 kernel: audit: type=1334 audit(1768953388.738:731): prog-id=250 op=UNLOAD Jan 20 23:56:28.738000 audit[4868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.754817 kernel: audit: type=1300 audit(1768953388.738:731): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.759102 kernel: audit: type=1327 audit(1768953388.738:731): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.738000 audit: BPF prog-id=251 op=LOAD Jan 20 23:56:28.760145 kernel: audit: type=1334 audit(1768953388.738:732): prog-id=251 op=LOAD Jan 20 23:56:28.738000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.762582 kernel: audit: type=1300 audit(1768953388.738:732): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.738000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.767174 kernel: audit: type=1327 audit(1768953388.738:732): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.740000 audit: BPF prog-id=252 op=LOAD Jan 20 23:56:28.740000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.740000 audit: BPF prog-id=252 op=UNLOAD Jan 20 23:56:28.740000 audit[4868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.740000 audit: BPF prog-id=251 op=UNLOAD Jan 20 23:56:28.740000 audit[4868]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.740000 audit: BPF prog-id=253 op=LOAD Jan 20 23:56:28.740000 audit[4868]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4856 pid=4868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3431303033653038616532363838656564383238396261663834326137 Jan 20 23:56:28.767000 audit[4888]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:28.767000 audit[4888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffdc77eba0 a2=0 a3=1 items=0 ppid=2925 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:28.777000 audit[4888]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4888 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:28.777000 audit[4888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdc77eba0 a2=0 a3=1 items=0 ppid=2925 pid=4888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:28.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:28.794301 kubelet[2824]: I0120 23:56:28.794178 2824 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hqqpq" podStartSLOduration=45.794157903 podStartE2EDuration="45.794157903s" podCreationTimestamp="2026-01-20 23:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-20 23:56:28.792400899 +0000 UTC m=+53.462154314" watchObservedRunningTime="2026-01-20 23:56:28.794157903 +0000 UTC m=+53.463911318" Jan 20 23:56:28.820279 containerd[1592]: time="2026-01-20T23:56:28.820191207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5446b598c6-knjcl,Uid:dddef414-cca7-4fb7-84c5-239896cb0ee3,Namespace:calico-system,Attempt:0,} returns sandbox id \"41003e08ae2688eed8289baf842a7cb58d21ff42d1021acd97167acc3152ed77\"" Jan 20 23:56:28.824827 containerd[1592]: time="2026-01-20T23:56:28.824754258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:56:28.917249 systemd-networkd[1481]: calieac4de63287: Gained IPv6LL Jan 20 23:56:29.162634 containerd[1592]: time="2026-01-20T23:56:29.162552103Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:29.163988 containerd[1592]: time="2026-01-20T23:56:29.163875906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:56:29.164271 containerd[1592]: time="2026-01-20T23:56:29.163940706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:29.164338 kubelet[2824]: E0120 23:56:29.164197 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:56:29.164338 kubelet[2824]: E0120 23:56:29.164289 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:56:29.164475 kubelet[2824]: E0120 23:56:29.164424 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:29.165945 kubelet[2824]: E0120 23:56:29.165787 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:56:29.174482 systemd-networkd[1481]: cali2d7d88f196a: Gained IPv6LL Jan 20 23:56:29.719338 kubelet[2824]: E0120 23:56:29.718270 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:56:29.719338 kubelet[2824]: E0120 23:56:29.718277 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:56:29.796000 audit[4896]: NETFILTER_CFG table=filter:146 family=2 entries=14 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:29.796000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe68846a0 a2=0 a3=1 items=0 ppid=2925 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:29.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:29.807000 audit[4896]: NETFILTER_CFG table=nat:147 family=2 entries=56 op=nft_register_chain pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:56:29.807000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffe68846a0 a2=0 a3=1 items=0 ppid=2925 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:29.807000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:56:29.814877 systemd-networkd[1481]: cali47b857e7a9f: Gained IPv6LL Jan 20 23:56:30.724388 kubelet[2824]: E0120 23:56:30.724211 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:56:34.124719 systemd[1]: Started sshd@8-188.245.60.37:22-104.248.242.212:6103.service - OpenSSH per-connection server daemon (104.248.242.212:6103). 
Jan 20 23:56:34.128988 kernel: kauditd_printk_skb: 24 callbacks suppressed Jan 20 23:56:34.129026 kernel: audit: type=1130 audit(1768953394.123:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.60.37:22-104.248.242.212:6103 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:34.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.60.37:22-104.248.242.212:6103 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:34.199859 sshd[4907]: Unable to negotiate with 104.248.242.212 port 6103: no matching key exchange method found. Their offer: diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group-exchange-sha256 [preauth] Jan 20 23:56:34.202922 systemd[1]: sshd@8-188.245.60.37:22-104.248.242.212:6103.service: Deactivated successfully. Jan 20 23:56:34.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.60.37:22-104.248.242.212:6103 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:34.207138 kernel: audit: type=1131 audit(1768953394.202:742): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.60.37:22-104.248.242.212:6103 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:34.456688 containerd[1592]: time="2026-01-20T23:56:34.456533284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:56:34.807870 containerd[1592]: time="2026-01-20T23:56:34.807670229Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:34.809257 containerd[1592]: time="2026-01-20T23:56:34.809165071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:56:34.809563 containerd[1592]: time="2026-01-20T23:56:34.809351272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:34.809890 kubelet[2824]: E0120 23:56:34.809810 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:56:34.809890 kubelet[2824]: E0120 23:56:34.809882 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:56:34.810390 kubelet[2824]: E0120 23:56:34.810012 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad3fd4707e1b4e4592bbc8d326d656a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:34.812442 containerd[1592]: time="2026-01-20T23:56:34.812183116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:56:35.175590 containerd[1592]: time="2026-01-20T23:56:35.175333183Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:35.178072 containerd[1592]: time="2026-01-20T23:56:35.177744227Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:56:35.178282 containerd[1592]: time="2026-01-20T23:56:35.177835347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:35.178581 kubelet[2824]: E0120 23:56:35.178475 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:56:35.178770 kubelet[2824]: E0120 23:56:35.178566 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:56:35.178770 kubelet[2824]: E0120 23:56:35.178725 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:35.179895 kubelet[2824]: E0120 23:56:35.179856 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:56:37.456827 containerd[1592]: time="2026-01-20T23:56:37.455663468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:56:37.788825 containerd[1592]: time="2026-01-20T23:56:37.788752643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:37.791650 containerd[1592]: time="2026-01-20T23:56:37.791558364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:37.792079 containerd[1592]: time="2026-01-20T23:56:37.791925564Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:56:37.794063 kubelet[2824]: E0120 23:56:37.793266 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:56:37.794063 kubelet[2824]: E0120 23:56:37.793351 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:56:37.794063 kubelet[2824]: E0120 23:56:37.793537 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:37.800059 containerd[1592]: time="2026-01-20T23:56:37.800009207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:56:38.140506 containerd[1592]: time="2026-01-20T23:56:38.140256662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:38.141821 containerd[1592]: time="2026-01-20T23:56:38.141689623Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:56:38.141978 containerd[1592]: time="2026-01-20T23:56:38.141756783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:38.142142 kubelet[2824]: E0120 23:56:38.142082 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:56:38.142257 kubelet[2824]: E0120 23:56:38.142163 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:56:38.142542 kubelet[2824]: E0120 23:56:38.142460 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:38.144796 kubelet[2824]: E0120 23:56:38.144729 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:40.456265 containerd[1592]: time="2026-01-20T23:56:40.455894529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:56:40.795945 containerd[1592]: time="2026-01-20T23:56:40.795697538Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:40.797474 containerd[1592]: time="2026-01-20T23:56:40.797288299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:56:40.797474 containerd[1592]: time="2026-01-20T23:56:40.797413099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:40.797730 kubelet[2824]: E0120 23:56:40.797658 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:56:40.798115 kubelet[2824]: E0120 23:56:40.797747 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:56:40.798115 kubelet[2824]: E0120 23:56:40.797935 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qdtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:40.799526 kubelet[2824]: E0120 23:56:40.799428 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:56:43.458091 containerd[1592]: time="2026-01-20T23:56:43.458005285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 
20 23:56:43.801519 containerd[1592]: time="2026-01-20T23:56:43.801451688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:43.803057 containerd[1592]: time="2026-01-20T23:56:43.802977888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:56:43.803356 containerd[1592]: time="2026-01-20T23:56:43.803018648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:43.804121 kubelet[2824]: E0120 23:56:43.803502 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:43.804121 kubelet[2824]: E0120 23:56:43.803557 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:43.804121 kubelet[2824]: E0120 23:56:43.803682 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgrbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:43.805149 kubelet[2824]: E0120 23:56:43.805085 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:56:45.457714 containerd[1592]: time="2026-01-20T23:56:45.457446595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:56:45.801286 containerd[1592]: time="2026-01-20T23:56:45.801232233Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:45.803008 containerd[1592]: time="2026-01-20T23:56:45.802935873Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:45.803386 containerd[1592]: time="2026-01-20T23:56:45.803033153Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:56:45.803845 kubelet[2824]: E0120 23:56:45.803804 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:56:45.804467 kubelet[2824]: E0120 23:56:45.804243 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:56:45.804745 kubelet[2824]: E0120 23:56:45.804638 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:45.804957 containerd[1592]: time="2026-01-20T23:56:45.804919154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:56:45.806663 kubelet[2824]: E0120 23:56:45.806631 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:56:46.140516 containerd[1592]: 
time="2026-01-20T23:56:46.140102069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:46.141785 containerd[1592]: time="2026-01-20T23:56:46.141713030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:56:46.141979 containerd[1592]: time="2026-01-20T23:56:46.141855030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:46.142359 kubelet[2824]: E0120 23:56:46.142304 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:46.142438 kubelet[2824]: E0120 23:56:46.142371 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:46.142598 kubelet[2824]: E0120 23:56:46.142542 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfhtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:46.144157 kubelet[2824]: E0120 23:56:46.144110 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:56:47.458080 kubelet[2824]: E0120 23:56:47.457695 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:56:50.455527 kubelet[2824]: E0120 23:56:50.454980 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:56:55.456290 kubelet[2824]: E0120 23:56:55.456199 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:56:56.455971 kubelet[2824]: E0120 23:56:56.455888 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:56:58.455557 kubelet[2824]: E0120 23:56:58.455500 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:56:59.458867 kubelet[2824]: E0120 23:56:59.458476 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:57:01.457549 containerd[1592]: time="2026-01-20T23:57:01.457350010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:57:01.802390 containerd[1592]: time="2026-01-20T23:57:01.802280981Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:01.805725 containerd[1592]: time="2026-01-20T23:57:01.804422182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:57:01.805862 containerd[1592]: time="2026-01-20T23:57:01.804473422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:01.806201 kubelet[2824]: E0120 23:57:01.806073 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:57:01.806201 kubelet[2824]: E0120 23:57:01.806181 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:57:01.807171 kubelet[2824]: E0120 23:57:01.807077 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad3fd4707e1b4e4592bbc8d326d656a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:01.811660 containerd[1592]: time="2026-01-20T23:57:01.811618503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:57:02.128551 containerd[1592]: time="2026-01-20T23:57:02.128281350Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:02.130634 containerd[1592]: time="2026-01-20T23:57:02.130492870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:57:02.130634 containerd[1592]: time="2026-01-20T23:57:02.130507150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:02.131059 kubelet[2824]: E0120 23:57:02.130923 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:57:02.131059 kubelet[2824]: E0120 23:57:02.131008 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:57:02.131532 kubelet[2824]: E0120 23:57:02.131346 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:02.132910 kubelet[2824]: E0120 23:57:02.132848 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:57:04.458054 containerd[1592]: time="2026-01-20T23:57:04.457979404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:57:04.798456 containerd[1592]: time="2026-01-20T23:57:04.798403971Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:04.800493 containerd[1592]: time="2026-01-20T23:57:04.800140811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:04.800830 containerd[1592]: time="2026-01-20T23:57:04.800683371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:57:04.801192 kubelet[2824]: E0120 23:57:04.801144 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:57:04.802597 kubelet[2824]: E0120 23:57:04.801728 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:57:04.803885 kubelet[2824]: E0120 23:57:04.803831 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:04.806435 containerd[1592]: time="2026-01-20T23:57:04.806366652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:57:05.153020 containerd[1592]: time="2026-01-20T23:57:05.152355540Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:05.155080 containerd[1592]: time="2026-01-20T23:57:05.154790340Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:57:05.155903 containerd[1592]: time="2026-01-20T23:57:05.154840540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:05.156250 kubelet[2824]: E0120 23:57:05.156186 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:57:05.156346 kubelet[2824]: E0120 23:57:05.156264 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:57:05.156444 kubelet[2824]: E0120 23:57:05.156400 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:05.158418 kubelet[2824]: E0120 23:57:05.157680 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:57:08.456435 containerd[1592]: time="2026-01-20T23:57:08.456337375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:57:08.801254 containerd[1592]: time="2026-01-20T23:57:08.800993178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:08.802559 containerd[1592]: time="2026-01-20T23:57:08.802514298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:57:08.802777 containerd[1592]: time="2026-01-20T23:57:08.802700258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:08.802997 kubelet[2824]: E0120 23:57:08.802902 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:57:08.802997 kubelet[2824]: E0120 23:57:08.802953 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:57:08.804646 kubelet[2824]: E0120 23:57:08.804576 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgrbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:08.805806 kubelet[2824]: E0120 23:57:08.805768 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:57:09.460319 containerd[1592]: time="2026-01-20T23:57:09.459994260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:57:09.829781 containerd[1592]: time="2026-01-20T23:57:09.829122025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:09.830988 containerd[1592]: time="2026-01-20T23:57:09.830942386Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:57:09.831187 containerd[1592]: time="2026-01-20T23:57:09.831028346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:09.832258 kubelet[2824]: 
E0120 23:57:09.832210 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:57:09.832609 kubelet[2824]: E0120 23:57:09.832268 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:57:09.832609 kubelet[2824]: E0120 23:57:09.832403 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qdtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:09.833980 kubelet[2824]: E0120 23:57:09.833922 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:57:11.459090 containerd[1592]: time="2026-01-20T23:57:11.457717621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:57:11.813359 containerd[1592]: time="2026-01-20T23:57:11.813308543Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:11.814925 containerd[1592]: time="2026-01-20T23:57:11.814872543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:57:11.815064 containerd[1592]: time="2026-01-20T23:57:11.814963423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:11.816074 kubelet[2824]: E0120 23:57:11.815531 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:57:11.816074 kubelet[2824]: E0120 23:57:11.815616 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:57:11.816074 kubelet[2824]: E0120 23:57:11.815748 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:11.817221 kubelet[2824]: E0120 23:57:11.817160 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:57:12.458264 containerd[1592]: time="2026-01-20T23:57:12.458213658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:57:12.794436 containerd[1592]: 
time="2026-01-20T23:57:12.794229736Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:12.796087 containerd[1592]: time="2026-01-20T23:57:12.795997256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:57:12.796329 containerd[1592]: time="2026-01-20T23:57:12.796288536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:12.797142 kubelet[2824]: E0120 23:57:12.796547 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:57:12.797142 kubelet[2824]: E0120 23:57:12.796603 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:57:12.797142 kubelet[2824]: E0120 23:57:12.796723 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfhtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:12.798313 kubelet[2824]: E0120 23:57:12.798263 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:57:16.458502 kubelet[2824]: E0120 23:57:16.458439 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:57:19.458132 kubelet[2824]: E0120 23:57:19.457610 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:57:20.455920 kubelet[2824]: E0120 23:57:20.455822 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:57:25.457145 kubelet[2824]: E0120 23:57:25.456345 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:57:25.459068 kubelet[2824]: E0120 23:57:25.457898 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:57:27.457224 kubelet[2824]: E0120 23:57:27.456850 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:57:28.457255 kubelet[2824]: E0120 23:57:28.457200 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:57:31.460701 kubelet[2824]: E0120 23:57:31.460658 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:57:32.461331 kubelet[2824]: E0120 23:57:32.461259 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:57:38.457616 kubelet[2824]: E0120 23:57:38.457570 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:57:40.456883 kubelet[2824]: E0120 23:57:40.456843 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:57:40.457464 kubelet[2824]: E0120 23:57:40.456898 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:57:43.458587 containerd[1592]: time="2026-01-20T23:57:43.458161486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:57:43.803294 containerd[1592]: time="2026-01-20T23:57:43.803220588Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:43.804830 containerd[1592]: time="2026-01-20T23:57:43.804777708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:57:43.804946 containerd[1592]: time="2026-01-20T23:57:43.804873708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:43.806305 kubelet[2824]: E0120 23:57:43.806214 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:57:43.806305 kubelet[2824]: E0120 23:57:43.806269 2824 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:57:43.806856 kubelet[2824]: E0120 23:57:43.806375 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad3fd4707e1b4e4592bbc8d326d656a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:43.810105 containerd[1592]: time="2026-01-20T23:57:43.809451628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:57:44.161854 containerd[1592]: time="2026-01-20T23:57:44.161732130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:44.163493 containerd[1592]: time="2026-01-20T23:57:44.163294930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:57:44.163493 containerd[1592]: time="2026-01-20T23:57:44.163397410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:44.163640 kubelet[2824]: E0120 23:57:44.163580 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:57:44.163640 kubelet[2824]: E0120 23:57:44.163627 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:57:44.164098 kubelet[2824]: E0120 23:57:44.163725 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:44.165262 kubelet[2824]: E0120 23:57:44.165218 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:57:46.455773 kubelet[2824]: E0120 23:57:46.455714 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:57:46.457389 containerd[1592]: time="2026-01-20T23:57:46.456774510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:57:46.790650 containerd[1592]: time="2026-01-20T23:57:46.790565970Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:46.792390 containerd[1592]: time="2026-01-20T23:57:46.792268211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:57:46.792490 containerd[1592]: time="2026-01-20T23:57:46.792417171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:46.792758 kubelet[2824]: E0120 23:57:46.792712 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:57:46.792965 kubelet[2824]: E0120 23:57:46.792936 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:57:46.793398 kubelet[2824]: E0120 23:57:46.793222 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:46.796219 containerd[1592]: time="2026-01-20T23:57:46.796183691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:57:47.130767 containerd[1592]: time="2026-01-20T23:57:47.130104391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:47.131941 containerd[1592]: time="2026-01-20T23:57:47.131741391Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:57:47.131941 containerd[1592]: time="2026-01-20T23:57:47.131855751Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:47.132299 kubelet[2824]: E0120 23:57:47.132233 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:57:47.132299 kubelet[2824]: E0120 23:57:47.132292 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:57:47.132498 kubelet[2824]: E0120 23:57:47.132418 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:47.133829 kubelet[2824]: E0120 23:57:47.133743 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:57:52.457333 kubelet[2824]: E0120 23:57:52.457283 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:57:52.458198 containerd[1592]: time="2026-01-20T23:57:52.458165616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:57:52.805749 containerd[1592]: time="2026-01-20T23:57:52.805693795Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:52.806925 containerd[1592]: time="2026-01-20T23:57:52.806851235Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:57:52.807099 containerd[1592]: time="2026-01-20T23:57:52.806964555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:52.807778 kubelet[2824]: E0120 23:57:52.807165 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:57:52.807778 kubelet[2824]: E0120 23:57:52.807223 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:57:52.807778 kubelet[2824]: E0120 23:57:52.807583 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qdtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:52.809000 kubelet[2824]: E0120 23:57:52.808937 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:57:53.457290 containerd[1592]: time="2026-01-20T23:57:53.457229270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:57:53.802272 containerd[1592]: time="2026-01-20T23:57:53.802221369Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:53.804205 containerd[1592]: time="2026-01-20T23:57:53.804148329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:57:53.804351 containerd[1592]: time="2026-01-20T23:57:53.804253409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:53.805066 kubelet[2824]: E0120 23:57:53.804534 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:57:53.805066 kubelet[2824]: E0120 23:57:53.804592 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:57:53.805066 
kubelet[2824]: E0120 23:57:53.804722 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:53.806751 kubelet[2824]: E0120 23:57:53.806692 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:57:54.457977 kubelet[2824]: E0120 23:57:54.457918 2824 pod_workers.go:1301] "Error syncing pod, skipping" 
err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:57:58.456427 containerd[1592]: time="2026-01-20T23:57:58.455777654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:57:58.797968 containerd[1592]: time="2026-01-20T23:57:58.797660631Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:57:58.799457 containerd[1592]: time="2026-01-20T23:57:58.799369911Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:57:58.799635 containerd[1592]: time="2026-01-20T23:57:58.799481191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:57:58.800883 kubelet[2824]: E0120 23:57:58.800253 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:57:58.800883 kubelet[2824]: E0120 23:57:58.800319 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:57:58.800883 kubelet[2824]: E0120 23:57:58.800493 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgrbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:57:58.801928 kubelet[2824]: E0120 23:57:58.801753 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:58:00.457543 kubelet[2824]: E0120 23:58:00.457353 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:58:04.456932 kubelet[2824]: E0120 23:58:04.456802 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:58:05.456060 kubelet[2824]: E0120 23:58:05.455709 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:58:06.457911 containerd[1592]: time="2026-01-20T23:58:06.457861564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:58:06.803240 containerd[1592]: time="2026-01-20T23:58:06.803154220Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:58:06.804895 containerd[1592]: time="2026-01-20T23:58:06.804701060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:58:06.804895 containerd[1592]: time="2026-01-20T23:58:06.804821260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:58:06.805176 kubelet[2824]: E0120 23:58:06.805079 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:58:06.805176 kubelet[2824]: E0120 23:58:06.805139 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:58:06.805775 kubelet[2824]: E0120 23:58:06.805263 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfhtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:58:06.806464 kubelet[2824]: E0120 23:58:06.806413 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:58:08.456329 kubelet[2824]: E0120 23:58:08.456267 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:58:09.458966 kubelet[2824]: E0120 23:58:09.458526 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:58:15.465073 kubelet[2824]: E0120 23:58:15.464357 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:58:16.456255 kubelet[2824]: E0120 23:58:16.456195 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:58:20.456804 kubelet[2824]: E0120 23:58:20.456465 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:58:20.456804 kubelet[2824]: E0120 23:58:20.456486 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:58:21.458329 kubelet[2824]: E0120 23:58:21.457571 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:58:22.458149 kubelet[2824]: E0120 23:58:22.458082 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:58:29.457075 kubelet[2824]: E0120 23:58:29.456829 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:58:30.457292 kubelet[2824]: E0120 23:58:30.457140 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:58:33.458000 kubelet[2824]: E0120 23:58:33.457578 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:58:33.460191 
kubelet[2824]: E0120 23:58:33.460076 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:58:34.455939 kubelet[2824]: E0120 23:58:34.455548 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:58:36.455945 kubelet[2824]: E0120 23:58:36.455812 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:58:41.457110 kubelet[2824]: E0120 23:58:41.456082 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:58:44.457066 kubelet[2824]: E0120 23:58:44.456693 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:58:44.457066 kubelet[2824]: E0120 23:58:44.456944 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:58:46.456780 kubelet[2824]: E0120 23:58:46.456515 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:58:46.456780 kubelet[2824]: E0120 23:58:46.456700 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:58:51.460458 kubelet[2824]: E0120 23:58:51.459104 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:58:53.457334 kubelet[2824]: E0120 23:58:53.457260 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:58:55.461797 kubelet[2824]: E0120 23:58:55.461712 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" 
with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:58:58.457323 kubelet[2824]: E0120 23:58:58.456697 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:58:58.457323 kubelet[2824]: E0120 23:58:58.457075 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:59:00.457642 kubelet[2824]: E0120 23:59:00.457586 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:59:02.457292 kubelet[2824]: E0120 23:59:02.457087 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:59:04.455733 kubelet[2824]: E0120 23:59:04.455655 2824 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:59:06.455639 kubelet[2824]: E0120 23:59:06.455545 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:59:10.457645 kubelet[2824]: E0120 23:59:10.457230 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:59:12.455235 kubelet[2824]: E0120 23:59:12.455193 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:59:13.462642 systemd[1]: Started sshd@9-188.245.60.37:22-20.161.92.111:35094.service - OpenSSH per-connection server daemon (20.161.92.111:35094). Jan 20 23:59:13.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.60.37:22-20.161.92.111:35094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:13.467125 kernel: audit: type=1130 audit(1768953553.462:743): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.60.37:22-20.161.92.111:35094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:59:14.037000 audit[5127]: USER_ACCT pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.041832 sshd[5127]: Accepted publickey for core from 20.161.92.111 port 35094 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:14.044923 kernel: audit: type=1101 audit(1768953554.037:744): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.044961 kernel: audit: type=1103 audit(1768953554.041:745): pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.041000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.045899 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:14.050140 kernel: audit: type=1006 audit(1768953554.041:746): pid=5127 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 20 23:59:14.050345 kernel: audit: type=1300 audit(1768953554.041:746): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda506a10 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:14.041000 audit[5127]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda506a10 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:14.041000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:14.053223 kernel: audit: type=1327 audit(1768953554.041:746): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:14.061206 systemd-logind[1569]: New session 9 of user core. Jan 20 23:59:14.068260 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 20 23:59:14.071000 audit[5127]: USER_START pid=5127 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.076000 audit[5133]: CRED_ACQ pid=5133 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.079572 kernel: audit: type=1105 audit(1768953554.071:747): pid=5127 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.079677 kernel: audit: type=1103 audit(1768953554.076:748): pid=5133 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.457077 sshd[5133]: Connection closed by 20.161.92.111 port 35094 Jan 20 23:59:14.458032 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:14.460000 audit[5127]: USER_END pid=5127 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.460000 audit[5127]: CRED_DISP pid=5127 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.466933 systemd[1]: sshd@9-188.245.60.37:22-20.161.92.111:35094.service: Deactivated successfully. Jan 20 23:59:14.469049 kernel: audit: type=1106 audit(1768953554.460:749): pid=5127 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.469119 kernel: audit: type=1104 audit(1768953554.460:750): pid=5127 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:14.467674 systemd-logind[1569]: Session 9 logged out. Waiting for processes to exit. Jan 20 23:59:14.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.60.37:22-20.161.92.111:35094 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:14.473990 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 23:59:14.480733 systemd-logind[1569]: Removed session 9. 
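The 404s that containerd keeps reporting in the entries above and below come from the registry's manifest endpoint: the v3.30.4 tags under ghcr.io/flatcar/calico/* do not resolve, so every pull attempt ends in ErrImagePull and then ImagePullBackOff. Below is a minimal sketch of the same lookup over the standard OCI distribution API, useful for checking a tag by hand; the anonymous-token flow, header set, and repository path are assumptions about how ghcr.io serves public pulls, not details taken from this log.

```python
# Minimal sketch: query ghcr.io for the manifest of one of the tags the
# kubelet fails to pull, via the OCI distribution API. Assumes ghcr.io
# issues anonymous pull-scoped bearer tokens from its /token endpoint for
# public repositories (an assumption; the log does not show this exchange).
import json
import urllib.error
import urllib.request

REGISTRY = "ghcr.io"
REPOSITORY = "flatcar/calico/csi"   # one of the images failing in the log
TAG = "v3.30.4"

def get_anonymous_token(repository: str) -> str:
    """Fetch a pull-scoped bearer token for an anonymous client."""
    url = (f"https://{REGISTRY}/token"
           f"?service={REGISTRY}&scope=repository:{repository}:pull")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["token"]

def manifest_status(repository: str, tag: str) -> int:
    """Return the HTTP status of the manifest request (404 = tag not found)."""
    token = get_anonymous_token(repository)
    req = urllib.request.Request(
        f"https://{REGISTRY}/v2/{repository}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            # Accept the common image-index and manifest media types so the
            # registry does not reject the request for format reasons.
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json, "
                      "application/vnd.oci.image.manifest.v1+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        return err.code

if __name__ == "__main__":
    # A 404 here matches the "fetch failed after status: 404 Not Found"
    # lines that containerd logs for these images.
    print(manifest_status(REPOSITORY, TAG))
```

A 200 from this check while the kubelet still reports ErrImagePull would instead point at node-side configuration (mirrors, auth, or containerd settings) rather than a missing tag.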
Jan 20 23:59:15.456934 containerd[1592]: time="2026-01-20T23:59:15.456878360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:59:15.793974 containerd[1592]: time="2026-01-20T23:59:15.793922735Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:15.795480 containerd[1592]: time="2026-01-20T23:59:15.795324650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:59:15.795480 containerd[1592]: time="2026-01-20T23:59:15.795354330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:15.795838 kubelet[2824]: E0120 23:59:15.795774 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:59:15.797980 kubelet[2824]: E0120 23:59:15.795824 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:59:15.797980 kubelet[2824]: E0120 23:59:15.797905 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fn595,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5446b598c6-knjcl_calico-system(dddef414-cca7-4fb7-84c5-239896cb0ee3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:15.798229 containerd[1592]: time="2026-01-20T23:59:15.797562883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:59:15.800214 kubelet[2824]: E0120 23:59:15.799456 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:59:16.137424 containerd[1592]: time="2026-01-20T23:59:16.136696656Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:16.139022 containerd[1592]: time="2026-01-20T23:59:16.138940449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:59:16.139022 containerd[1592]: time="2026-01-20T23:59:16.138992649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:16.139533 kubelet[2824]: E0120 23:59:16.139482 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:59:16.139596 kubelet[2824]: E0120 23:59:16.139547 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:59:16.139712 kubelet[2824]: E0120 23:59:16.139659 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ad3fd4707e1b4e4592bbc8d326d656a6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:16.143093 containerd[1592]: time="2026-01-20T23:59:16.142928636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:59:16.484489 containerd[1592]: time="2026-01-20T23:59:16.484438011Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:16.485895 containerd[1592]: time="2026-01-20T23:59:16.485854406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:59:16.485981 containerd[1592]: time="2026-01-20T23:59:16.485947046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:16.486187 kubelet[2824]: E0120 23:59:16.486147 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:59:16.486240 kubelet[2824]: E0120 23:59:16.486199 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:59:16.486342 kubelet[2824]: E0120 23:59:16.486304 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ljss6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-9ccb8665d-km95p_calico-system(511e82cf-6210-4f23-b7d3-73c990aafdbb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:16.487703 kubelet[2824]: E0120 23:59:16.487635 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:59:17.456596 kubelet[2824]: E0120 23:59:17.455811 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:59:19.455784 containerd[1592]: 
time="2026-01-20T23:59:19.455700379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:59:19.574622 systemd[1]: Started sshd@10-188.245.60.37:22-20.161.92.111:35108.service - OpenSSH per-connection server daemon (20.161.92.111:35108). Jan 20 23:59:19.577224 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:59:19.577302 kernel: audit: type=1130 audit(1768953559.573:752): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.60.37:22-20.161.92.111:35108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:19.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.60.37:22-20.161.92.111:35108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:19.800980 containerd[1592]: time="2026-01-20T23:59:19.800801266Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:19.803519 containerd[1592]: time="2026-01-20T23:59:19.803424978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:59:19.803789 containerd[1592]: time="2026-01-20T23:59:19.803680337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:19.804201 kubelet[2824]: E0120 23:59:19.803951 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:59:19.804201 kubelet[2824]: E0120 23:59:19.804003 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:59:19.804201 kubelet[2824]: E0120 23:59:19.804144 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:19.806731 containerd[1592]: time="2026-01-20T23:59:19.806508408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:59:20.138431 containerd[1592]: time="2026-01-20T23:59:20.138258342Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:20.140289 containerd[1592]: time="2026-01-20T23:59:20.140214776Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:59:20.140683 containerd[1592]: time="2026-01-20T23:59:20.140257936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:20.140734 kubelet[2824]: E0120 23:59:20.140525 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:59:20.140734 kubelet[2824]: E0120 23:59:20.140571 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:59:20.141376 kubelet[2824]: E0120 23:59:20.141274 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5qkx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-5sc47_calico-system(9b72cdbf-b6bd-45ae-98ac-50d5aed18456): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:20.143259 kubelet[2824]: E0120 23:59:20.143217 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:59:20.142000 audit[5146]: USER_ACCT pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh 
res=success' Jan 20 23:59:20.143987 sshd[5146]: Accepted publickey for core from 20.161.92.111 port 35108 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:20.147845 sshd-session[5146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:20.146000 audit[5146]: CRED_ACQ pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.151258 kernel: audit: type=1101 audit(1768953560.142:753): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.151352 kernel: audit: type=1103 audit(1768953560.146:754): pid=5146 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.151387 kernel: audit: type=1006 audit(1768953560.146:755): pid=5146 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 20 23:59:20.146000 audit[5146]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff75bbf90 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:20.155250 kernel: audit: type=1300 audit(1768953560.146:755): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff75bbf90 a2=3 a3=0 items=0 ppid=1 pid=5146 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:20.146000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:20.156108 systemd-logind[1569]: New session 10 of user core. Jan 20 23:59:20.158057 kernel: audit: type=1327 audit(1768953560.146:755): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:20.161288 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 20 23:59:20.165000 audit[5146]: USER_START pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.167000 audit[5174]: CRED_ACQ pid=5174 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.172418 kernel: audit: type=1105 audit(1768953560.165:756): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.172514 kernel: audit: type=1103 audit(1768953560.167:757): pid=5174 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.561491 sshd[5174]: Connection closed by 20.161.92.111 port 35108 Jan 20 23:59:20.563697 sshd-session[5146]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:20.565000 audit[5146]: USER_END pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.565000 audit[5146]: CRED_DISP pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.574695 kernel: audit: type=1106 audit(1768953560.565:758): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.574778 kernel: audit: type=1104 audit(1768953560.565:759): pid=5146 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:20.575499 systemd[1]: sshd@10-188.245.60.37:22-20.161.92.111:35108.service: Deactivated successfully. Jan 20 23:59:20.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.60.37:22-20.161.92.111:35108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:20.580361 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 23:59:20.582947 systemd-logind[1569]: Session 10 logged out. Waiting for processes to exit. Jan 20 23:59:20.586887 systemd-logind[1569]: Removed session 10. 
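The containerd entries above all follow the same pattern: a PullImage request for a ghcr.io/flatcar/calico/* reference, a 404 from ghcr.io, and a NotFound error that kubelet then records as ErrImagePull. A minimal sketch, not part of this log, of how one could reproduce that failure outside kubelet by issuing the same pull through the containerd client; the socket path and the "k8s.io" namespace are assumptions based on a default containerd/CRI setup.

// Illustrative sketch, not taken from this log: ask containerd directly for
// the same reference the kubelet reports as failing.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket on a Flatcar/CRI host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images conventionally live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		// If the registry still answers 404 for this tag, this mirrors the
		// "failed to resolve image ... not found" error in the log above.
		fmt.Printf("pull %s failed: %v\n", ref, err)
		return
	}
	fmt.Println("pulled", ref)
}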
Jan 20 23:59:24.456397 containerd[1592]: time="2026-01-20T23:59:24.456206754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:59:24.812904 containerd[1592]: time="2026-01-20T23:59:24.812654475Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:24.816225 containerd[1592]: time="2026-01-20T23:59:24.816139745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:24.816225 containerd[1592]: time="2026-01-20T23:59:24.816155225Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:59:24.816770 kubelet[2824]: E0120 23:59:24.816491 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:59:24.818680 kubelet[2824]: E0120 23:59:24.816671 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:59:24.820013 kubelet[2824]: E0120 23:59:24.819956 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wgrbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fmt6r_calico-apiserver(867a1dc3-f4d9-4cba-a9b8-47adcf051929): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:24.822015 kubelet[2824]: E0120 23:59:24.821942 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:59:25.669751 systemd[1]: Started sshd@11-188.245.60.37:22-20.161.92.111:41400.service - OpenSSH per-connection server daemon (20.161.92.111:41400). Jan 20 23:59:25.673589 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:59:25.673621 kernel: audit: type=1130 audit(1768953565.669:761): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-188.245.60.37:22-20.161.92.111:41400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:25.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-188.245.60.37:22-20.161.92.111:41400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:59:26.233000 audit[5186]: USER_ACCT pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.239160 sshd[5186]: Accepted publickey for core from 20.161.92.111 port 41400 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:26.244101 kernel: audit: type=1101 audit(1768953566.233:762): pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.244226 kernel: audit: type=1103 audit(1768953566.237:763): pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.237000 audit[5186]: CRED_ACQ pid=5186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.244001 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:26.246485 kernel: audit: type=1006 audit(1768953566.239:764): pid=5186 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 23:59:26.246779 kernel: audit: type=1300 audit(1768953566.239:764): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3b5bf90 a2=3 a3=0 items=0 ppid=1 pid=5186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:26.239000 audit[5186]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc3b5bf90 a2=3 a3=0 items=0 ppid=1 pid=5186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:26.239000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:26.250356 kernel: audit: type=1327 audit(1768953566.239:764): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:26.254159 systemd-logind[1569]: New session 11 of user core. Jan 20 23:59:26.266451 systemd[1]: Started session-11.scope - Session 11 of User core. 
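The audit PROCTITLE records interleaved with the SSH session entries carry a hex-encoded command line. A small sketch, not part of this log, decoding the value that appears for these sessions; it prints "sshd-session: core [priv]", which matches the sshd-session/core entries around it.

// Illustrative sketch: decode the hex-encoded audit PROCTITLE value seen above.
package main

import (
	"encoding/hex"
	"fmt"
)

func main() {
	const proctitle = "737368642D73657373696F6E3A20636F7265205B707269765D"
	decoded, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// Prints: sshd-session: core [priv]
	fmt.Println(string(decoded))
}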
Jan 20 23:59:26.271000 audit[5186]: USER_START pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.277088 kernel: audit: type=1105 audit(1768953566.271:765): pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.278000 audit[5190]: CRED_ACQ pid=5190 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.283080 kernel: audit: type=1103 audit(1768953566.278:766): pid=5190 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.457663 containerd[1592]: time="2026-01-20T23:59:26.457604273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:59:26.634916 sshd[5190]: Connection closed by 20.161.92.111 port 41400 Jan 20 23:59:26.635677 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:26.638000 audit[5186]: USER_END pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.643837 systemd-logind[1569]: Session 11 logged out. Waiting for processes to exit. Jan 20 23:59:26.645848 kernel: audit: type=1106 audit(1768953566.638:767): pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.645939 kernel: audit: type=1104 audit(1768953566.639:768): pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.639000 audit[5186]: CRED_DISP pid=5186 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:26.646859 systemd[1]: sshd@11-188.245.60.37:22-20.161.92.111:41400.service: Deactivated successfully. Jan 20 23:59:26.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-188.245.60.37:22-20.161.92.111:41400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:59:26.654504 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 23:59:26.659565 systemd-logind[1569]: Removed session 11. Jan 20 23:59:26.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-188.245.60.37:22-20.161.92.111:41410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:26.748532 systemd[1]: Started sshd@12-188.245.60.37:22-20.161.92.111:41410.service - OpenSSH per-connection server daemon (20.161.92.111:41410). Jan 20 23:59:26.806837 containerd[1592]: time="2026-01-20T23:59:26.806185884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:26.808150 containerd[1592]: time="2026-01-20T23:59:26.808069958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:26.809243 containerd[1592]: time="2026-01-20T23:59:26.809094555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:59:26.809428 kubelet[2824]: E0120 23:59:26.809375 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:59:26.809791 kubelet[2824]: E0120 23:59:26.809441 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:59:26.809791 kubelet[2824]: E0120 23:59:26.809646 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qdtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wkbbm_calico-system(0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:26.812080 kubelet[2824]: E0120 23:59:26.811201 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:59:27.313000 audit[5203]: USER_ACCT pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:27.315806 sshd[5203]: Accepted publickey for core from 20.161.92.111 port 41410 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:27.315000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:27.315000 audit[5203]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe49639d0 a2=3 a3=0 items=0 ppid=1 pid=5203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:27.315000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:27.317237 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:27.335968 systemd-logind[1569]: New session 12 of user core. Jan 20 23:59:27.341438 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 23:59:27.350000 audit[5203]: USER_START pid=5203 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:27.352000 audit[5212]: CRED_ACQ pid=5212 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:27.769698 sshd[5212]: Connection closed by 20.161.92.111 port 41410 Jan 20 23:59:27.769264 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:27.770000 audit[5203]: USER_END pid=5203 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:27.771000 audit[5203]: CRED_DISP pid=5203 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:27.776758 systemd[1]: sshd@12-188.245.60.37:22-20.161.92.111:41410.service: Deactivated successfully. Jan 20 23:59:27.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-188.245.60.37:22-20.161.92.111:41410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:27.780750 systemd[1]: session-12.scope: Deactivated successfully. Jan 20 23:59:27.785376 systemd-logind[1569]: Session 12 logged out. Waiting for processes to exit. Jan 20 23:59:27.787114 systemd-logind[1569]: Removed session 12. 
Jan 20 23:59:27.877660 systemd[1]: Started sshd@13-188.245.60.37:22-20.161.92.111:41412.service - OpenSSH per-connection server daemon (20.161.92.111:41412). Jan 20 23:59:27.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-188.245.60.37:22-20.161.92.111:41412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:28.411000 audit[5226]: USER_ACCT pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:28.412370 sshd[5226]: Accepted publickey for core from 20.161.92.111 port 41412 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:28.413000 audit[5226]: CRED_ACQ pid=5226 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:28.413000 audit[5226]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe3b2ec70 a2=3 a3=0 items=0 ppid=1 pid=5226 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:28.413000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:28.415264 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:28.425331 systemd-logind[1569]: New session 13 of user core. Jan 20 23:59:28.438343 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 23:59:28.440000 audit[5226]: USER_START pid=5226 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:28.443000 audit[5244]: CRED_ACQ pid=5244 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:28.458355 kubelet[2824]: E0120 23:59:28.458160 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:59:28.810517 sshd[5244]: Connection closed by 20.161.92.111 port 41412 Jan 20 23:59:28.810829 sshd-session[5226]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:28.813000 audit[5226]: USER_END pid=5226 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:28.814000 audit[5226]: CRED_DISP pid=5226 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:28.820783 systemd-logind[1569]: Session 13 logged out. Waiting for processes to exit. Jan 20 23:59:28.822320 systemd[1]: sshd@13-188.245.60.37:22-20.161.92.111:41412.service: Deactivated successfully. Jan 20 23:59:28.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-188.245.60.37:22-20.161.92.111:41412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:28.825845 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 23:59:28.831547 systemd-logind[1569]: Removed session 13. 
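By this point the kubelet entries have shifted from ErrImagePull to ImagePullBackOff for the calico-system workloads. A minimal sketch, not part of this log, of how one might list the current waiting reason per container via the Kubernetes API; the kubeconfig path and namespace are assumptions, not values taken from this log.

// Illustrative sketch: print the waiting reason (e.g. ImagePullBackOff) for
// containers in the calico-system namespace.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for the cluster at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := clientset.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				// e.g. "whisker-9ccb8665d-km95p: whisker ImagePullBackOff"
				fmt.Printf("%s: %s %s\n", pod.Name, cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}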
Jan 20 23:59:29.455516 containerd[1592]: time="2026-01-20T23:59:29.455182519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:59:29.803180 containerd[1592]: time="2026-01-20T23:59:29.803123608Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:59:29.804823 containerd[1592]: time="2026-01-20T23:59:29.804666004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:59:29.804823 containerd[1592]: time="2026-01-20T23:59:29.804764324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:59:29.807231 kubelet[2824]: E0120 23:59:29.807182 2824 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:59:29.807231 kubelet[2824]: E0120 23:59:29.807236 2824 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:59:29.807627 kubelet[2824]: E0120 23:59:29.807364 2824 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfhtz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-799c555658-fhgzx_calico-apiserver(3f49a160-e207-435e-86a4-138a2a624ffb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:59:29.808889 kubelet[2824]: E0120 23:59:29.808842 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:59:30.457096 kubelet[2824]: E0120 23:59:30.456086 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:59:31.459625 kubelet[2824]: E0120 23:59:31.459419 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:59:33.925603 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 23:59:33.925742 kernel: audit: type=1130 audit(1768953573.921:788): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-188.245.60.37:22-20.161.92.111:33328 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:33.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-188.245.60.37:22-20.161.92.111:33328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:33.922361 systemd[1]: Started sshd@14-188.245.60.37:22-20.161.92.111:33328.service - OpenSSH per-connection server daemon (20.161.92.111:33328). Jan 20 23:59:34.463005 sshd[5262]: Accepted publickey for core from 20.161.92.111 port 33328 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:34.461000 audit[5262]: USER_ACCT pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.467617 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:34.470524 kernel: audit: type=1101 audit(1768953574.461:789): pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.470595 kernel: audit: type=1103 audit(1768953574.465:790): pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.465000 audit[5262]: CRED_ACQ pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.474192 kernel: audit: type=1006 audit(1768953574.465:791): pid=5262 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 20 23:59:34.465000 audit[5262]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9b854a0 a2=3 a3=0 items=0 ppid=1 pid=5262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:34.478559 kernel: audit: type=1300 audit(1768953574.465:791): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9b854a0 a2=3 a3=0 items=0 ppid=1 pid=5262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:34.483418 kernel: audit: type=1327 audit(1768953574.465:791): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:34.465000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:34.483374 systemd-logind[1569]: New session 14 of user core. Jan 20 23:59:34.488277 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 20 23:59:34.493000 audit[5262]: USER_START pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.497000 audit[5266]: CRED_ACQ pid=5266 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.501911 kernel: audit: type=1105 audit(1768953574.493:792): pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.501980 kernel: audit: type=1103 audit(1768953574.497:793): pid=5266 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.864066 sshd[5266]: Connection closed by 20.161.92.111 port 33328 Jan 20 23:59:34.864603 sshd-session[5262]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:34.867000 audit[5262]: USER_END pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.875061 kernel: audit: type=1106 audit(1768953574.867:794): pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.867000 audit[5262]: CRED_DISP pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.877193 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 23:59:34.878375 systemd[1]: sshd@14-188.245.60.37:22-20.161.92.111:33328.service: Deactivated successfully. Jan 20 23:59:34.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-188.245.60.37:22-20.161.92.111:33328 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:34.883108 kernel: audit: type=1104 audit(1768953574.867:795): pid=5262 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:34.885144 systemd-logind[1569]: Session 14 logged out. Waiting for processes to exit. Jan 20 23:59:34.886142 systemd-logind[1569]: Removed session 14. 
Jan 20 23:59:37.459735 kubelet[2824]: E0120 23:59:37.459337 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:59:38.455824 kubelet[2824]: E0120 23:59:38.455625 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:59:39.973936 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:59:39.974059 kernel: audit: type=1130 audit(1768953579.970:797): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-188.245.60.37:22-20.161.92.111:33334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:39.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-188.245.60.37:22-20.161.92.111:33334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:39.971249 systemd[1]: Started sshd@15-188.245.60.37:22-20.161.92.111:33334.service - OpenSSH per-connection server daemon (20.161.92.111:33334). 
Jan 20 23:59:40.510000 audit[5283]: USER_ACCT pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.514586 sshd[5283]: Accepted publickey for core from 20.161.92.111 port 33334 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:40.515168 kernel: audit: type=1101 audit(1768953580.510:798): pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.515000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.517639 sshd-session[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:40.526562 kernel: audit: type=1103 audit(1768953580.515:799): pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.526680 kernel: audit: type=1006 audit(1768953580.515:800): pid=5283 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 20 23:59:40.515000 audit[5283]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8eecfb0 a2=3 a3=0 items=0 ppid=1 pid=5283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:40.532145 kernel: audit: type=1300 audit(1768953580.515:800): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc8eecfb0 a2=3 a3=0 items=0 ppid=1 pid=5283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:40.533525 kernel: audit: type=1327 audit(1768953580.515:800): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:40.515000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:40.533904 systemd-logind[1569]: New session 15 of user core. Jan 20 23:59:40.542485 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 20 23:59:40.548000 audit[5283]: USER_START pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.554180 kernel: audit: type=1105 audit(1768953580.548:801): pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.554324 kernel: audit: type=1103 audit(1768953580.552:802): pid=5287 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.552000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.876688 sshd[5287]: Connection closed by 20.161.92.111 port 33334 Jan 20 23:59:40.877889 sshd-session[5283]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:40.881000 audit[5283]: USER_END pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.881000 audit[5283]: CRED_DISP pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.886895 systemd[1]: sshd@15-188.245.60.37:22-20.161.92.111:33334.service: Deactivated successfully. Jan 20 23:59:40.887258 kernel: audit: type=1106 audit(1768953580.881:803): pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.887313 kernel: audit: type=1104 audit(1768953580.881:804): pid=5283 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:40.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-188.245.60.37:22-20.161.92.111:33334 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:40.891913 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 23:59:40.895751 systemd-logind[1569]: Session 15 logged out. Waiting for processes to exit. Jan 20 23:59:40.897512 systemd-logind[1569]: Removed session 15. 
Jan 20 23:59:42.456752 kubelet[2824]: E0120 23:59:42.456635 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:59:43.460331 kubelet[2824]: E0120 23:59:43.460199 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:59:43.463706 kubelet[2824]: E0120 23:59:43.463621 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:59:45.458839 kubelet[2824]: E0120 23:59:45.458765 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 20 23:59:45.984657 systemd[1]: Started sshd@16-188.245.60.37:22-20.161.92.111:44738.service - OpenSSH per-connection server daemon (20.161.92.111:44738). Jan 20 23:59:45.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-188.245.60.37:22-20.161.92.111:44738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:59:45.988129 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:59:45.988230 kernel: audit: type=1130 audit(1768953585.984:806): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-188.245.60.37:22-20.161.92.111:44738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:46.511000 audit[5301]: USER_ACCT pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.516063 kernel: audit: type=1101 audit(1768953586.511:807): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.516600 sshd[5301]: Accepted publickey for core from 20.161.92.111 port 44738 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:46.517000 audit[5301]: CRED_ACQ pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.517927 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:46.523730 kernel: audit: type=1103 audit(1768953586.517:808): pid=5301 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.523985 kernel: audit: type=1006 audit(1768953586.517:809): pid=5301 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 23:59:46.524017 kernel: audit: type=1300 audit(1768953586.517:809): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe45d2a80 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:46.517000 audit[5301]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe45d2a80 a2=3 a3=0 items=0 ppid=1 pid=5301 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:46.517000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:46.528062 kernel: audit: type=1327 audit(1768953586.517:809): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:46.529986 systemd-logind[1569]: New session 16 of user core. Jan 20 23:59:46.537306 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 20 23:59:46.543000 audit[5301]: USER_START pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.548947 kernel: audit: type=1105 audit(1768953586.543:810): pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.549000 audit[5305]: CRED_ACQ pid=5305 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.553067 kernel: audit: type=1103 audit(1768953586.549:811): pid=5305 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.902839 sshd[5305]: Connection closed by 20.161.92.111 port 44738 Jan 20 23:59:46.902620 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:46.904000 audit[5301]: USER_END pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.904000 audit[5301]: CRED_DISP pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.912643 kernel: audit: type=1106 audit(1768953586.904:812): pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.912742 kernel: audit: type=1104 audit(1768953586.904:813): pid=5301 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:46.914088 systemd[1]: sshd@16-188.245.60.37:22-20.161.92.111:44738.service: Deactivated successfully. Jan 20 23:59:46.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-188.245.60.37:22-20.161.92.111:44738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:46.917976 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 23:59:46.920404 systemd-logind[1569]: Session 16 logged out. Waiting for processes to exit. Jan 20 23:59:46.925302 systemd-logind[1569]: Removed session 16. 
Jan 20 23:59:49.455828 kubelet[2824]: E0120 23:59:49.455076 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 20 23:59:52.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-188.245.60.37:22-20.161.92.111:44746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:52.015087 systemd[1]: Started sshd@17-188.245.60.37:22-20.161.92.111:44746.service - OpenSSH per-connection server daemon (20.161.92.111:44746). Jan 20 23:59:52.017702 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:59:52.017900 kernel: audit: type=1130 audit(1768953592.015:815): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-188.245.60.37:22-20.161.92.111:44746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:52.457579 kubelet[2824]: E0120 23:59:52.456500 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 20 23:59:52.588000 audit[5340]: USER_ACCT pid=5340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.589063 sshd[5340]: Accepted publickey for core from 20.161.92.111 port 44746 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:52.592098 kernel: audit: type=1101 audit(1768953592.588:816): pid=5340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.592000 audit[5340]: CRED_ACQ pid=5340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.595467 kernel: audit: type=1103 audit(1768953592.592:817): pid=5340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.595543 kernel: audit: type=1006 audit(1768953592.592:818): pid=5340 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=17 res=1 Jan 20 23:59:52.595427 sshd-session[5340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:52.592000 audit[5340]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0ba6b30 a2=3 a3=0 items=0 ppid=1 pid=5340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:52.598445 kernel: audit: type=1300 audit(1768953592.592:818): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd0ba6b30 a2=3 a3=0 items=0 ppid=1 pid=5340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:52.599388 kernel: audit: type=1327 audit(1768953592.592:818): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:52.592000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:52.606122 systemd-logind[1569]: New session 17 of user core. Jan 20 23:59:52.610600 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 23:59:52.616000 audit[5340]: USER_START pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.619083 kernel: audit: type=1105 audit(1768953592.616:819): pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.619000 audit[5344]: CRED_ACQ pid=5344 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:52.622089 kernel: audit: type=1103 audit(1768953592.619:820): pid=5344 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.001523 sshd[5344]: Connection closed by 20.161.92.111 port 44746 Jan 20 23:59:53.002455 sshd-session[5340]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:53.004000 audit[5340]: USER_END pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.004000 audit[5340]: CRED_DISP pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.009737 kernel: audit: type=1106 audit(1768953593.004:821): pid=5340 uid=0 auid=500 ses=17 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.009863 kernel: audit: type=1104 audit(1768953593.004:822): pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-188.245.60.37:22-20.161.92.111:44746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:53.010520 systemd[1]: sshd@17-188.245.60.37:22-20.161.92.111:44746.service: Deactivated successfully. Jan 20 23:59:53.014458 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 23:59:53.017137 systemd-logind[1569]: Session 17 logged out. Waiting for processes to exit. Jan 20 23:59:53.019955 systemd-logind[1569]: Removed session 17. Jan 20 23:59:53.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-188.245.60.37:22-20.161.92.111:49762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:53.118448 systemd[1]: Started sshd@18-188.245.60.37:22-20.161.92.111:49762.service - OpenSSH per-connection server daemon (20.161.92.111:49762). Jan 20 23:59:53.669000 audit[5357]: USER_ACCT pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.670167 sshd[5357]: Accepted publickey for core from 20.161.92.111 port 49762 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:53.672000 audit[5357]: CRED_ACQ pid=5357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.673000 audit[5357]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf0d1a50 a2=3 a3=0 items=0 ppid=1 pid=5357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:53.673000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:53.675724 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:53.684843 systemd-logind[1569]: New session 18 of user core. Jan 20 23:59:53.691291 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 20 23:59:53.696000 audit[5357]: USER_START pid=5357 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:53.701000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:54.250580 sshd[5362]: Connection closed by 20.161.92.111 port 49762 Jan 20 23:59:54.251209 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:54.252000 audit[5357]: USER_END pid=5357 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:54.253000 audit[5357]: CRED_DISP pid=5357 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:54.257006 systemd-logind[1569]: Session 18 logged out. Waiting for processes to exit. Jan 20 23:59:54.259676 systemd[1]: sshd@18-188.245.60.37:22-20.161.92.111:49762.service: Deactivated successfully. Jan 20 23:59:54.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-188.245.60.37:22-20.161.92.111:49762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:54.267190 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 23:59:54.272200 systemd-logind[1569]: Removed session 18. Jan 20 23:59:54.358008 systemd[1]: Started sshd@19-188.245.60.37:22-20.161.92.111:49774.service - OpenSSH per-connection server daemon (20.161.92.111:49774). Jan 20 23:59:54.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-188.245.60.37:22-20.161.92.111:49774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:59:54.456006 kubelet[2824]: E0120 23:59:54.455654 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 20 23:59:54.906000 audit[5372]: USER_ACCT pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:54.907560 sshd[5372]: Accepted publickey for core from 20.161.92.111 port 49774 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:54.910000 audit[5372]: CRED_ACQ pid=5372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:54.910000 audit[5372]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc4ef16a0 a2=3 a3=0 items=0 ppid=1 pid=5372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:54.910000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:54.913011 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:54.922026 systemd-logind[1569]: New session 19 of user core. Jan 20 23:59:54.926535 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 23:59:54.932000 audit[5372]: USER_START pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:54.936000 audit[5376]: CRED_ACQ pid=5376 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:55.914817 sshd[5376]: Connection closed by 20.161.92.111 port 49774 Jan 20 23:59:55.915236 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:55.918000 audit[5372]: USER_END pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:55.918000 audit[5372]: CRED_DISP pid=5372 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:55.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-188.245.60.37:22-20.161.92.111:49774 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:55.924188 systemd-logind[1569]: Session 19 logged out. Waiting for processes to exit. Jan 20 23:59:55.924200 systemd[1]: sshd@19-188.245.60.37:22-20.161.92.111:49774.service: Deactivated successfully. Jan 20 23:59:55.929013 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 23:59:55.934957 systemd-logind[1569]: Removed session 19. 
Jan 20 23:59:55.940000 audit[5386]: NETFILTER_CFG table=filter:148 family=2 entries=26 op=nft_register_rule pid=5386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:59:55.940000 audit[5386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd7c34100 a2=0 a3=1 items=0 ppid=2925 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:55.940000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:59:55.951000 audit[5386]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:59:55.951000 audit[5386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd7c34100 a2=0 a3=1 items=0 ppid=2925 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:55.951000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:59:55.968000 audit[5391]: NETFILTER_CFG table=filter:150 family=2 entries=38 op=nft_register_rule pid=5391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:59:55.968000 audit[5391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffcf9d96d0 a2=0 a3=1 items=0 ppid=2925 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:55.968000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:59:55.980000 audit[5391]: NETFILTER_CFG table=nat:151 family=2 entries=20 op=nft_register_rule pid=5391 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:59:55.980000 audit[5391]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcf9d96d0 a2=0 a3=1 items=0 ppid=2925 pid=5391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:55.980000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:59:56.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-188.245.60.37:22-20.161.92.111:49778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:56.025479 systemd[1]: Started sshd@20-188.245.60.37:22-20.161.92.111:49778.service - OpenSSH per-connection server daemon (20.161.92.111:49778). 
Jan 20 23:59:56.457761 kubelet[2824]: E0120 23:59:56.457701 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 20 23:59:56.581000 audit[5393]: USER_ACCT pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:56.582263 sshd[5393]: Accepted publickey for core from 20.161.92.111 port 49778 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:56.582000 audit[5393]: CRED_ACQ pid=5393 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:56.582000 audit[5393]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef876a00 a2=3 a3=0 items=0 ppid=1 pid=5393 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:56.582000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:56.583716 sshd-session[5393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:56.593927 systemd-logind[1569]: New session 20 of user core. Jan 20 23:59:56.599375 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 20 23:59:56.603000 audit[5393]: USER_START pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:56.605000 audit[5397]: CRED_ACQ pid=5397 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.163117 sshd[5397]: Connection closed by 20.161.92.111 port 49778 Jan 20 23:59:57.164383 sshd-session[5393]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:57.172109 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 20 23:59:57.172260 kernel: audit: type=1106 audit(1768953597.167:852): pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.167000 audit[5393]: USER_END pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.170000 audit[5393]: CRED_DISP pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.173247 systemd[1]: sshd@20-188.245.60.37:22-20.161.92.111:49778.service: Deactivated successfully. Jan 20 23:59:57.175365 kernel: audit: type=1104 audit(1768953597.170:853): pid=5393 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-188.245.60.37:22-20.161.92.111:49778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:57.177922 kernel: audit: type=1131 audit(1768953597.173:854): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-188.245.60.37:22-20.161.92.111:49778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:57.180409 systemd[1]: session-20.scope: Deactivated successfully. Jan 20 23:59:57.182153 systemd-logind[1569]: Session 20 logged out. Waiting for processes to exit. Jan 20 23:59:57.186312 systemd-logind[1569]: Removed session 20. Jan 20 23:59:57.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-188.245.60.37:22-20.161.92.111:49792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:59:57.272690 systemd[1]: Started sshd@21-188.245.60.37:22-20.161.92.111:49792.service - OpenSSH per-connection server daemon (20.161.92.111:49792). Jan 20 23:59:57.278119 kernel: audit: type=1130 audit(1768953597.272:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-188.245.60.37:22-20.161.92.111:49792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:57.833000 audit[5409]: USER_ACCT pid=5409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.836910 sshd[5409]: Accepted publickey for core from 20.161.92.111 port 49792 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 20 23:59:57.839000 audit[5409]: CRED_ACQ pid=5409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.842858 kernel: audit: type=1101 audit(1768953597.833:856): pid=5409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.842968 kernel: audit: type=1103 audit(1768953597.839:857): pid=5409 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.843020 sshd-session[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:59:57.845620 kernel: audit: type=1006 audit(1768953597.840:858): pid=5409 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 23:59:57.840000 audit[5409]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb48e2e0 a2=3 a3=0 items=0 ppid=1 pid=5409 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:57.848109 kernel: audit: type=1300 audit(1768953597.840:858): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcb48e2e0 a2=3 a3=0 items=0 ppid=1 pid=5409 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:59:57.848228 kernel: audit: type=1327 audit(1768953597.840:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:57.840000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:59:57.856411 systemd-logind[1569]: New session 21 of user core. Jan 20 23:59:57.862334 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 20 23:59:57.868000 audit[5409]: USER_START pid=5409 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.873130 kernel: audit: type=1105 audit(1768953597.868:859): pid=5409 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:57.872000 audit[5413]: CRED_ACQ pid=5413 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:58.237058 sshd[5413]: Connection closed by 20.161.92.111 port 49792 Jan 20 23:59:58.237464 sshd-session[5409]: pam_unix(sshd:session): session closed for user core Jan 20 23:59:58.240000 audit[5409]: USER_END pid=5409 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:58.240000 audit[5409]: CRED_DISP pid=5409 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:59:58.246798 systemd[1]: sshd@21-188.245.60.37:22-20.161.92.111:49792.service: Deactivated successfully. Jan 20 23:59:58.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-188.245.60.37:22-20.161.92.111:49792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:59:58.250729 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 23:59:58.252405 systemd-logind[1569]: Session 21 logged out. Waiting for processes to exit. Jan 20 23:59:58.255692 systemd-logind[1569]: Removed session 21. 
Jan 20 23:59:58.457730 kubelet[2824]: E0120 23:59:58.457666 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 20 23:59:59.461012 kubelet[2824]: E0120 23:59:59.460929 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 21 00:00:00.456596 kubelet[2824]: E0121 00:00:00.456522 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 21 00:00:03.367208 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 21 00:00:03.367357 kernel: audit: type=1130 audit(1768953603.364:864): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-188.245.60.37:22-20.161.92.111:40070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:03.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-188.245.60.37:22-20.161.92.111:40070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:03.364460 systemd[1]: Started sshd@22-188.245.60.37:22-20.161.92.111:40070.service - OpenSSH per-connection server daemon (20.161.92.111:40070). 
Jan 21 00:00:03.454980 kubelet[2824]: E0121 00:00:03.454914 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 21 00:00:03.951791 sshd[5425]: Accepted publickey for core from 20.161.92.111 port 40070 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 21 00:00:03.951000 audit[5425]: USER_ACCT pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.956096 kernel: audit: type=1101 audit(1768953603.951:865): pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.956000 audit[5425]: CRED_ACQ pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.959284 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:00:03.960787 kernel: audit: type=1103 audit(1768953603.956:866): pid=5425 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.960885 kernel: audit: type=1006 audit(1768953603.958:867): pid=5425 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 21 00:00:03.958000 audit[5425]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0cd3330 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:03.964268 kernel: audit: type=1300 audit(1768953603.958:867): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc0cd3330 a2=3 a3=0 items=0 ppid=1 pid=5425 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:03.958000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:03.965327 kernel: audit: type=1327 audit(1768953603.958:867): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:03.968869 systemd-logind[1569]: New session 22 of user core. Jan 21 00:00:03.973320 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 21 00:00:03.979000 audit[5425]: USER_START pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.983103 kernel: audit: type=1105 audit(1768953603.979:868): pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.984000 audit[5429]: CRED_ACQ pid=5429 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:03.987100 kernel: audit: type=1103 audit(1768953603.984:869): pid=5429 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:04.395941 sshd[5429]: Connection closed by 20.161.92.111 port 40070 Jan 21 00:00:04.398308 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Jan 21 00:00:04.400000 audit[5425]: USER_END pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:04.405799 systemd[1]: sshd@22-188.245.60.37:22-20.161.92.111:40070.service: Deactivated successfully. Jan 21 00:00:04.400000 audit[5425]: CRED_DISP pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:04.410381 systemd[1]: session-22.scope: Deactivated successfully. Jan 21 00:00:04.412081 kernel: audit: type=1106 audit(1768953604.400:870): pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:04.412184 kernel: audit: type=1104 audit(1768953604.400:871): pid=5425 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:04.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-188.245.60.37:22-20.161.92.111:40070 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:04.418779 systemd-logind[1569]: Session 22 logged out. Waiting for processes to exit. Jan 21 00:00:04.421751 systemd-logind[1569]: Removed session 22. 
Jan 21 00:00:05.324000 audit[5441]: NETFILTER_CFG table=filter:152 family=2 entries=26 op=nft_register_rule pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:00:05.324000 audit[5441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffffeb5c20 a2=0 a3=1 items=0 ppid=2925 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:05.324000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:00:05.328000 audit[5441]: NETFILTER_CFG table=nat:153 family=2 entries=104 op=nft_register_chain pid=5441 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 21 00:00:05.328000 audit[5441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffffeb5c20 a2=0 a3=1 items=0 ppid=2925 pid=5441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:05.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 21 00:00:08.455408 kubelet[2824]: E0121 00:00:08.455355 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 21 00:00:09.456204 kubelet[2824]: E0121 00:00:09.456145 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 21 00:00:09.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-188.245.60.37:22-20.161.92.111:40076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:09.510318 systemd[1]: Started sshd@23-188.245.60.37:22-20.161.92.111:40076.service - OpenSSH per-connection server daemon (20.161.92.111:40076). Jan 21 00:00:09.512884 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 21 00:00:09.512977 kernel: audit: type=1130 audit(1768953609.509:875): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-188.245.60.37:22-20.161.92.111:40076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 21 00:00:10.091000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.094330 sshd[5443]: Accepted publickey for core from 20.161.92.111 port 40076 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 21 00:00:10.099011 kernel: audit: type=1101 audit(1768953610.091:876): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.099166 kernel: audit: type=1103 audit(1768953610.095:877): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.099190 kernel: audit: type=1006 audit(1768953610.095:878): pid=5443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 21 00:00:10.095000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.097562 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:00:10.095000 audit[5443]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff072f00 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:10.103565 kernel: audit: type=1300 audit(1768953610.095:878): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff072f00 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:10.103684 kernel: audit: type=1327 audit(1768953610.095:878): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:10.095000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:10.109746 systemd-logind[1569]: New session 23 of user core. Jan 21 00:00:10.116425 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 21 00:00:10.121000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.126188 kernel: audit: type=1105 audit(1768953610.121:879): pid=5443 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.125000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.129115 kernel: audit: type=1103 audit(1768953610.125:880): pid=5447 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.455437 kubelet[2824]: E0121 00:00:10.454900 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 21 00:00:10.548077 sshd[5447]: Connection closed by 20.161.92.111 port 40076 Jan 21 00:00:10.548759 sshd-session[5443]: pam_unix(sshd:session): session closed for user core Jan 21 00:00:10.550000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.556135 systemd[1]: sshd@23-188.245.60.37:22-20.161.92.111:40076.service: Deactivated successfully. 
Jan 21 00:00:10.551000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.559798 kernel: audit: type=1106 audit(1768953610.550:881): pid=5443 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.559911 kernel: audit: type=1104 audit(1768953610.551:882): pid=5443 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:10.562000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-188.245.60.37:22-20.161.92.111:40076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:10.566135 systemd[1]: session-23.scope: Deactivated successfully. Jan 21 00:00:10.570703 systemd-logind[1569]: Session 23 logged out. Waiting for processes to exit. Jan 21 00:00:10.573771 systemd-logind[1569]: Removed session 23. Jan 21 00:00:12.456172 kubelet[2824]: E0121 00:00:12.455697 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 21 00:00:13.456195 kubelet[2824]: E0121 00:00:13.455558 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 21 00:00:14.454672 kubelet[2824]: E0121 00:00:14.454617 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" 
podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 21 00:00:15.646662 systemd[1]: Started sshd@24-188.245.60.37:22-20.161.92.111:37816.service - OpenSSH per-connection server daemon (20.161.92.111:37816). Jan 21 00:00:15.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-188.245.60.37:22-20.161.92.111:37816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:15.650059 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 00:00:15.650165 kernel: audit: type=1130 audit(1768953615.645:884): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-188.245.60.37:22-20.161.92.111:37816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:16.200000 audit[5461]: USER_ACCT pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.203700 sshd[5461]: Accepted publickey for core from 20.161.92.111 port 37816 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 21 00:00:16.205077 kernel: audit: type=1101 audit(1768953616.200:885): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.206807 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:00:16.204000 audit[5461]: CRED_ACQ pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.211105 kernel: audit: type=1103 audit(1768953616.204:886): pid=5461 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.211206 kernel: audit: type=1006 audit(1768953616.205:887): pid=5461 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 21 00:00:16.211244 kernel: audit: type=1300 audit(1768953616.205:887): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeddf0ac0 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:16.205000 audit[5461]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffeddf0ac0 a2=3 a3=0 items=0 ppid=1 pid=5461 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:16.205000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:16.217190 kernel: audit: type=1327 audit(1768953616.205:887): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:16.219323 systemd-logind[1569]: 
New session 24 of user core. Jan 21 00:00:16.226302 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 21 00:00:16.231000 audit[5461]: USER_START pid=5461 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.234000 audit[5465]: CRED_ACQ pid=5465 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.238623 kernel: audit: type=1105 audit(1768953616.231:888): pid=5461 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.238730 kernel: audit: type=1103 audit(1768953616.234:889): pid=5465 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.575334 sshd[5465]: Connection closed by 20.161.92.111 port 37816 Jan 21 00:00:16.576461 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Jan 21 00:00:16.578000 audit[5461]: USER_END pid=5461 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.578000 audit[5461]: CRED_DISP pid=5461 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.585245 systemd[1]: sshd@24-188.245.60.37:22-20.161.92.111:37816.service: Deactivated successfully. Jan 21 00:00:16.586287 kernel: audit: type=1106 audit(1768953616.578:890): pid=5461 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.586366 kernel: audit: type=1104 audit(1768953616.578:891): pid=5461 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:16.585000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-188.245.60.37:22-20.161.92.111:37816 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:16.590937 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 21 00:00:16.593927 systemd-logind[1569]: Session 24 logged out. Waiting for processes to exit. Jan 21 00:00:16.595265 systemd-logind[1569]: Removed session 24. Jan 21 00:00:21.702794 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 00:00:21.702902 kernel: audit: type=1130 audit(1768953621.699:893): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-188.245.60.37:22-20.161.92.111:37820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:21.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-188.245.60.37:22-20.161.92.111:37820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:21.699995 systemd[1]: Started sshd@25-188.245.60.37:22-20.161.92.111:37820.service - OpenSSH per-connection server daemon (20.161.92.111:37820). Jan 21 00:00:22.280000 audit[5500]: USER_ACCT pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.283204 sshd[5500]: Accepted publickey for core from 20.161.92.111 port 37820 ssh2: RSA SHA256:cvxf112fx+h3M2m6mkRPApI2MZ9XHlKVIwD7ZYvxNsY Jan 21 00:00:22.283000 audit[5500]: CRED_ACQ pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.285765 sshd-session[5500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 21 00:00:22.289830 kernel: audit: type=1101 audit(1768953622.280:894): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.289950 kernel: audit: type=1103 audit(1768953622.283:895): pid=5500 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.293685 kernel: audit: type=1006 audit(1768953622.283:896): pid=5500 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jan 21 00:00:22.283000 audit[5500]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd686dee0 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:22.296065 kernel: audit: type=1300 audit(1768953622.283:896): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd686dee0 a2=3 a3=0 items=0 ppid=1 pid=5500 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:22.283000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:22.297152 kernel: audit: type=1327 
audit(1768953622.283:896): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 21 00:00:22.299327 systemd-logind[1569]: New session 25 of user core. Jan 21 00:00:22.305402 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 21 00:00:22.309000 audit[5500]: USER_START pid=5500 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.312000 audit[5504]: CRED_ACQ pid=5504 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.315520 kernel: audit: type=1105 audit(1768953622.309:897): pid=5500 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.315598 kernel: audit: type=1103 audit(1768953622.312:898): pid=5504 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.456342 kubelet[2824]: E0121 00:00:22.455908 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 21 00:00:22.702582 sshd[5504]: Connection closed by 20.161.92.111 port 37820 Jan 21 00:00:22.702339 sshd-session[5500]: pam_unix(sshd:session): session closed for user core Jan 21 00:00:22.703000 audit[5500]: USER_END pid=5500 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.709605 systemd[1]: sshd@25-188.245.60.37:22-20.161.92.111:37820.service: Deactivated successfully. 
Jan 21 00:00:22.704000 audit[5500]: CRED_DISP pid=5500 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.713305 kernel: audit: type=1106 audit(1768953622.703:899): pid=5500 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.713419 kernel: audit: type=1104 audit(1768953622.704:900): pid=5500 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 21 00:00:22.717333 systemd[1]: session-25.scope: Deactivated successfully. Jan 21 00:00:22.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-188.245.60.37:22-20.161.92.111:37820 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 21 00:00:22.721652 systemd-logind[1569]: Session 25 logged out. Waiting for processes to exit. Jan 21 00:00:22.726762 systemd-logind[1569]: Removed session 25. Jan 21 00:00:23.458015 kubelet[2824]: E0121 00:00:23.457287 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 21 00:00:24.456446 kubelet[2824]: E0121 00:00:24.456379 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 21 00:00:27.458377 kubelet[2824]: E0121 00:00:27.457457 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 21 00:00:27.460606 kubelet[2824]: E0121 00:00:27.460532 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb" Jan 21 00:00:28.455636 kubelet[2824]: E0121 00:00:28.455570 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 21 00:00:30.335925 containerd[1592]: time="2026-01-21T00:00:30.335756263Z" level=info msg="container event discarded" container=af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba type=CONTAINER_CREATED_EVENT Jan 21 00:00:30.347241 containerd[1592]: time="2026-01-21T00:00:30.347165968Z" level=info msg="container event discarded" container=af303a766d0638eff194b20ca34d31f3feb7eb40fdb291834c3d4151df6fa8ba type=CONTAINER_STARTED_EVENT Jan 21 00:00:30.383673 containerd[1592]: time="2026-01-21T00:00:30.383554518Z" level=info msg="container event discarded" container=a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50 type=CONTAINER_CREATED_EVENT Jan 21 00:00:30.383673 containerd[1592]: time="2026-01-21T00:00:30.383608358Z" level=info msg="container event discarded" container=a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50 type=CONTAINER_STARTED_EVENT Jan 21 00:00:30.383673 containerd[1592]: time="2026-01-21T00:00:30.383628598Z" level=info msg="container event discarded" container=a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6 type=CONTAINER_CREATED_EVENT Jan 21 00:00:30.423079 containerd[1592]: time="2026-01-21T00:00:30.422969904Z" level=info msg="container event discarded" container=e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a type=CONTAINER_CREATED_EVENT Jan 21 00:00:30.423079 containerd[1592]: time="2026-01-21T00:00:30.423072464Z" level=info msg="container event discarded" container=e6723d56506b88220ae21abc1589e658cd2fd56617f754a8f7f40f8aacbc723a type=CONTAINER_STARTED_EVENT Jan 21 00:00:30.423261 containerd[1592]: time="2026-01-21T00:00:30.423097224Z" level=info msg="container event discarded" container=256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900 type=CONTAINER_CREATED_EVENT Jan 21 00:00:30.466298 containerd[1592]: time="2026-01-21T00:00:30.466191845Z" level=info msg="container event discarded" 
container=d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b type=CONTAINER_CREATED_EVENT Jan 21 00:00:30.481539 containerd[1592]: time="2026-01-21T00:00:30.481454985Z" level=info msg="container event discarded" container=a69678a59f36a54faeeed6a67eba4af947fb13a0767ad1d5820e615a118f7aa6 type=CONTAINER_STARTED_EVENT Jan 21 00:00:30.552020 containerd[1592]: time="2026-01-21T00:00:30.551925929Z" level=info msg="container event discarded" container=256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900 type=CONTAINER_STARTED_EVENT Jan 21 00:00:30.597672 containerd[1592]: time="2026-01-21T00:00:30.597314747Z" level=info msg="container event discarded" container=d15cd4298cf542e847756e6a9e0a9556e48c6299d4974e2de31bc6861c9b381b type=CONTAINER_STARTED_EVENT Jan 21 00:00:33.455442 kubelet[2824]: E0121 00:00:33.455343 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5446b598c6-knjcl" podUID="dddef414-cca7-4fb7-84c5-239896cb0ee3" Jan 21 00:00:36.767664 systemd[1]: cri-containerd-256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900.scope: Deactivated successfully. Jan 21 00:00:36.769871 systemd[1]: cri-containerd-256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900.scope: Consumed 5.440s CPU time, 64.8M memory peak, 2.7M read from disk. Jan 21 00:00:36.772695 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 21 00:00:36.773740 kernel: audit: type=1334 audit(1768953636.770:902): prog-id=254 op=LOAD Jan 21 00:00:36.774026 kernel: audit: type=1334 audit(1768953636.771:903): prog-id=86 op=UNLOAD Jan 21 00:00:36.775600 kernel: audit: type=1334 audit(1768953636.773:904): prog-id=101 op=UNLOAD Jan 21 00:00:36.775668 kernel: audit: type=1334 audit(1768953636.773:905): prog-id=105 op=UNLOAD Jan 21 00:00:36.770000 audit: BPF prog-id=254 op=LOAD Jan 21 00:00:36.771000 audit: BPF prog-id=86 op=UNLOAD Jan 21 00:00:36.773000 audit: BPF prog-id=101 op=UNLOAD Jan 21 00:00:36.773000 audit: BPF prog-id=105 op=UNLOAD Jan 21 00:00:36.776078 containerd[1592]: time="2026-01-21T00:00:36.770026308Z" level=info msg="received container exit event container_id:\"256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900\" id:\"256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900\" pid:2673 exit_status:1 exited_at:{seconds:1768953636 nanos:769550668}" Jan 21 00:00:36.804155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900-rootfs.mount: Deactivated successfully. Jan 21 00:00:36.805886 systemd[1784]: Created slice background.slice - User Background Tasks Slice. Jan 21 00:00:36.807936 systemd[1784]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... Jan 21 00:00:36.834465 systemd[1784]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. 
Jan 21 00:00:37.178305 kubelet[2824]: E0121 00:00:37.177313 2824 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60230->10.0.0.2:2379: read: connection timed out" Jan 21 00:00:37.456031 kubelet[2824]: E0121 00:00:37.455864 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-5sc47" podUID="9b72cdbf-b6bd-45ae-98ac-50d5aed18456" Jan 21 00:00:37.485767 kubelet[2824]: I0121 00:00:37.484757 2824 scope.go:117] "RemoveContainer" containerID="256e99c835973075e3e8f84224ca19de86588725e67d3f65854172ee1a4f0900" Jan 21 00:00:37.497203 containerd[1592]: time="2026-01-21T00:00:37.497146419Z" level=info msg="CreateContainer within sandbox \"a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 21 00:00:37.513298 containerd[1592]: time="2026-01-21T00:00:37.513135558Z" level=info msg="Container 6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:00:37.524089 containerd[1592]: time="2026-01-21T00:00:37.524022784Z" level=info msg="CreateContainer within sandbox \"a84d8eeb80479fd81d0d469d81808d65bcde8b6406448f4a52fcc0c4e39e2a50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987\"" Jan 21 00:00:37.526261 containerd[1592]: time="2026-01-21T00:00:37.526223181Z" level=info msg="StartContainer for \"6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987\"" Jan 21 00:00:37.527940 containerd[1592]: time="2026-01-21T00:00:37.527910579Z" level=info msg="connecting to shim 6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987" address="unix:///run/containerd/s/0a392a4317137361e395446fd0f298e2101737f352eec6b22d34ab79e174a078" protocol=ttrpc version=3 Jan 21 00:00:37.558488 systemd[1]: Started cri-containerd-6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987.scope - libcontainer container 6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987. 
Jan 21 00:00:37.574000 audit: BPF prog-id=255 op=LOAD Jan 21 00:00:37.577135 kernel: audit: type=1334 audit(1768953637.574:906): prog-id=255 op=LOAD Jan 21 00:00:37.576000 audit: BPF prog-id=256 op=LOAD Jan 21 00:00:37.576000 audit[5533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.582057 kernel: audit: type=1334 audit(1768953637.576:907): prog-id=256 op=LOAD Jan 21 00:00:37.582177 kernel: audit: type=1300 audit(1768953637.576:907): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.584895 kernel: audit: type=1327 audit(1768953637.576:907): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.584979 kernel: audit: type=1334 audit(1768953637.577:908): prog-id=256 op=UNLOAD Jan 21 00:00:37.577000 audit: BPF prog-id=256 op=UNLOAD Jan 21 00:00:37.577000 audit[5533]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.587351 kernel: audit: type=1300 audit(1768953637.577:908): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.577000 audit: BPF prog-id=257 op=LOAD Jan 21 00:00:37.577000 audit[5533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.578000 audit: BPF prog-id=258 op=LOAD Jan 21 00:00:37.578000 audit[5533]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.578000 audit: BPF prog-id=258 op=UNLOAD Jan 21 00:00:37.578000 audit[5533]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.578000 audit: BPF prog-id=257 op=UNLOAD Jan 21 00:00:37.578000 audit[5533]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.578000 audit: BPF prog-id=259 op=LOAD Jan 21 00:00:37.578000 audit[5533]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2537 pid=5533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:37.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664346133306364316639663631613637336433646464363339656165 Jan 21 00:00:37.627459 containerd[1592]: time="2026-01-21T00:00:37.627223373Z" level=info msg="StartContainer for \"6d4a30cd1f9f61a673d3ddd639eaeff7856534c658c74a972e5ca46b86e5b987\" returns successfully" Jan 21 00:00:37.944441 kubelet[2824]: E0121 00:00:37.944317 2824 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:32966->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{goldmane-666569f655-wkbbm.188c95bb2581553d calico-system 1902 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-wkbbm,UID:0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0,APIVersion:v1,ResourceVersion:785,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-f640cc67e1,},FirstTimestamp:2026-01-20 23:56:24 +0000 UTC,LastTimestamp:2026-01-21 00:00:27.457341169 +0000 UTC m=+292.127094584,Count:16,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-f640cc67e1,}" Jan 21 00:00:38.399140 systemd[1]: cri-containerd-1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9.scope: Deactivated successfully. Jan 21 00:00:38.399804 systemd[1]: cri-containerd-1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9.scope: Consumed 54.999s CPU time, 106.7M memory peak. Jan 21 00:00:38.403000 audit: BPF prog-id=144 op=UNLOAD Jan 21 00:00:38.403000 audit: BPF prog-id=148 op=UNLOAD Jan 21 00:00:38.404776 containerd[1592]: time="2026-01-21T00:00:38.404434187Z" level=info msg="received container exit event container_id:\"1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9\" id:\"1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9\" pid:3144 exit_status:1 exited_at:{seconds:1768953638 nanos:400304832}" Jan 21 00:00:38.444984 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9-rootfs.mount: Deactivated successfully. Jan 21 00:00:38.456024 kubelet[2824]: E0121 00:00:38.455919 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wkbbm" podUID="0ab4b60b-a35a-4fcf-b5ca-07bcd71284a0" Jan 21 00:00:38.456764 kubelet[2824]: E0121 00:00:38.456503 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fhgzx" podUID="3f49a160-e207-435e-86a4-138a2a624ffb" Jan 21 00:00:38.495756 kubelet[2824]: I0121 00:00:38.495191 2824 scope.go:117] "RemoveContainer" containerID="1429419852b67cc21cc80b3a2ece402be49f748fa2283961f7edd230c96ac4d9" Jan 21 00:00:38.498078 containerd[1592]: time="2026-01-21T00:00:38.498017589Z" level=info msg="CreateContainer within sandbox \"407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 21 00:00:38.519062 containerd[1592]: time="2026-01-21T00:00:38.517229365Z" level=info msg="Container d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a: CDI devices from CRI Config.CDIDevices: []" Jan 21 00:00:38.526803 containerd[1592]: time="2026-01-21T00:00:38.526761673Z" level=info msg="CreateContainer within sandbox \"407f87f6cadefee6169b756cfd65636b0c41c720d32cda20dfcdfb0096487734\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a\"" Jan 21 00:00:38.527465 containerd[1592]: 
time="2026-01-21T00:00:38.527429072Z" level=info msg="StartContainer for \"d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a\"" Jan 21 00:00:38.528333 containerd[1592]: time="2026-01-21T00:00:38.528307151Z" level=info msg="connecting to shim d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a" address="unix:///run/containerd/s/736c5e97012321b6658e67214af4a828d2f8738f592600cac6fa600382173113" protocol=ttrpc version=3 Jan 21 00:00:38.560436 systemd[1]: Started cri-containerd-d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a.scope - libcontainer container d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a. Jan 21 00:00:38.578000 audit: BPF prog-id=260 op=LOAD Jan 21 00:00:38.578000 audit: BPF prog-id=261 op=LOAD Jan 21 00:00:38.578000 audit[5574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.579000 audit: BPF prog-id=261 op=UNLOAD Jan 21 00:00:38.579000 audit[5574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.579000 audit: BPF prog-id=262 op=LOAD Jan 21 00:00:38.579000 audit[5574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.579000 audit: BPF prog-id=263 op=LOAD Jan 21 00:00:38.579000 audit[5574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.580000 audit: BPF prog-id=263 op=UNLOAD Jan 21 00:00:38.580000 audit[5574]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.580000 audit: BPF prog-id=262 op=UNLOAD Jan 21 00:00:38.580000 audit[5574]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.580000 audit: BPF prog-id=264 op=LOAD Jan 21 00:00:38.580000 audit[5574]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2994 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 21 00:00:38.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437396366663831333166316161306263336230313832303466623330 Jan 21 00:00:38.624680 containerd[1592]: time="2026-01-21T00:00:38.624635949Z" level=info msg="StartContainer for \"d79cff8131f1aa0bc3b018204fb3035c7192f188f531965e3b06ad88bcc94b4a\" returns successfully" Jan 21 00:00:39.352808 kubelet[2824]: I0121 00:00:39.352702 2824 status_manager.go:890] "Failed to get status for pod" podUID="ee8055c904db73e56bf50c1df7a9426e" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-f640cc67e1" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:60150->10.0.0.2:2379: read: connection timed out" Jan 21 00:00:40.457207 kubelet[2824]: E0121 00:00:40.456562 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-799c555658-fmt6r" podUID="867a1dc3-f4d9-4cba-a9b8-47adcf051929" Jan 21 00:00:40.458746 kubelet[2824]: E0121 00:00:40.458702 2824 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ccb8665d-km95p" podUID="511e82cf-6210-4f23-b7d3-73c990aafdbb"